Columns: Linux-Utility (string, 1 to 30 chars), Manual-Page (string, 700 to 948k chars), TLDR-Summary (string, 110 to 2.05k chars)
dnsdomainname
hostname(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training Another version of this page is provided by the coreutils project hostname(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | AUTHOR | COLOPHON HOSTNAME(1) Linux System Administrator's Manual HOSTNAME(1) NAME top hostname - show or set the system's host name dnsdomainname - show the system's DNS domain name domainname - show or set the system's NIS/YP domain name nisdomainname - show or set system's NIS/YP domain name nodename - show or set the system's DECnet node name ypdomainname - show or set the system's NIS/YP domain name SYNOPSIS top hostname [-v] [-s|--short] hostname [-v] [-a|--alias] [-d|--domain] [-f|--fqdn|--long] [-i|--ip-address] hostname [-v] [-y|--yp|--nis] [-n|--node] hostname [-v] [-F filename|--file filename] [newname] domainname [-v] [-F filename|--file filename] [newname] nodename [-v] [-F filename|--file filename] [newname] hostname [-v|--verbose] [-h|--help] [-V|--version] dnsdomainname [-v] nisdomainname [-v] ypdomainname [-v] DESCRIPTION top Hostname is the program that is used to either set or display the current host, domain or node name of the system. These names are used by many of the networking programs to identify the machine. The domain name is also used by NIS/YP. GET NAME When called without any arguments, the program displays the current names: hostname will print the name of the system as returned by the gethostname(2) function. domainname, nisdomainname, ypdomainname will print the name of the system as returned by the getdomainname(2) function. This is also known as the YP/NIS domain name of the system. nodename will print the DECnet node name of the system as returned by the getnodename(2) function. dnsdomainname will print the domain part of the FQDN (Fully Qualified Domain Name). The complete FQDN of the system is returned with hostname --fqdn. SET NAME When called with one argument or with the --file option, the commands set the host name, the NIS/YP domain name or the node name. Note, that only the super-user can change the names. It is not possible to set the FQDN or the DNS domain name with the dnsdomainname command (see THE FQDN below). The host name is usually set once at system startup by reading the contents of a file which contains the host name, e.g. /etc/hostname). THE FQDN You can't change the FQDN (as returned by hostname --fqdn) or the DNS domain name (as returned by dnsdomainname) with this command. The FQDN of the system is the name that the resolver(3) returns for the host name. Technically: The FQDN is the canonical name returned by gethostbyname2(2) when resolving the result of the gethostname(2) name. The DNS domain name is the part after the first dot. Therefore it depends on the configuration (usually in /etc/host.conf) how you can change it. If hosts is the first lookup method, you can change the FQDN in /etc/hosts. OPTIONS top -a, --alias Display the alias name of the host (if used). -d, --domain Display the name of the DNS domain (this is the FQDN without the segment up to the first dot). This is equivalent to using the dnsdomainname command. -F, --file filename Read the new host name from the specified file. Comments (lines starting with a `#') are ignored. -f, --fqdn, --long Display the FQDN (Fully Qualified Domain Name). A FQDN consists of name including the DNS domain. -h, --help Print a usage message and exit. -i, --ip-address Display the IP address(es) of the host. 
-n, --node Display the DECnet node name. If a parameter is given (or --file name ) the root can also set a new node name. -s, --short Display the short host name. This is the host name cut at the first dot. -V, --version Print version information on standard output and exit successfully. -v, --verbose Be verbose and tell what's going on. -y, --yp, --nis Display the NIS domain name. If a parameter is given (or --file name ) then root can also set a new NIS domain. FILES /etc/hostname /etc/hosts /etc/host.conf AUTHOR Peter Tobias, <tobias@et-inf.fho-emden.de> Bernd Eckenfels, <net-tools@lina.inka.de> (NIS and manpage). Steve Whitehouse, <SteveW@ACM.org> (DECnet support and manpage). net-tools 2013-08-29 HOSTNAME(1)
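As a quick illustration of the name forms described above, the short name, the FQDN and the DNS domain relate as follows; the flags are taken from the page, but the host name host.example.com is an assumed example:

    hostname --fqdn    # host.example.com  (canonical name the resolver returns)
    hostname -d        # example.com       (the FQDN without the segment up to the first dot)
    hostname -s        # host              (the host name cut at the first dot)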
# dnsdomainname

> Show the system's DNS domain name.
> Note: The tool uses `gethostname` to get the hostname of the system and then `getaddrinfo` to resolve it into a canonical name.
> More information: <https://www.gnu.org/software/inetutils/manual/html_node/dnsdomainname-invocation.html>.

- Show the system's DNS domain name:

`dnsdomainname`
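The note above describes a gethostname-plus-getaddrinfo lookup. A rough shell approximation of that chain, assuming glibc's getent is available and that the canonical name appears in the third field of the first `ahosts` line:

    # canonical name from getaddrinfo(), then everything after the first dot
    getent ahosts "$(hostname)" | awk 'NR==1 {print $3}' | cut -d. -f2-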
dot
dot(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dot(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT DOT(1P) POSIX Programmer's Manual DOT(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top dot execute commands in the current environment SYNOPSIS top . file DESCRIPTION top The shell shall execute commands from the file in the current environment. If file does not contain a <slash>, the shell shall use the search path specified by PATH to find the directory containing file. Unlike normal command search, however, the file searched for by the dot utility need not be executable. If no readable file is found, a non-interactive shell shall abort; an interactive shell shall write a diagnostic message to standard error, but this condition shall not be considered a syntax error. OPTIONS top None. OPERANDS top See the DESCRIPTION. STDIN top Not used. INPUT FILES top See the DESCRIPTION. ENVIRONMENT VARIABLES top See the DESCRIPTION. ASYNCHRONOUS EVENTS top Default. STDOUT top Not used. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top If no readable file was found or if the commands in the file could not be parsed, and the shell is interactive (and therefore does not abort; see Section 2.8.1, Consequences of Shell Errors), the exit status shall be non-zero. Otherwise, return the value of the last command executed, or a zero exit status if no command is executed. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top None. EXAMPLES top cat foobar foo=hello bar=world . ./foobar echo $foo $bar hello world RATIONALE top Some older implementations searched the current directory for the file, even if the value of PATH disallowed it. This behavior was omitted from this volume of POSIX.12017 due to concerns about introducing the susceptibility to trojan horses that the user might be trying to avoid by leaving dot out of PATH. The KornShell version of dot takes optional arguments that are set to the positional parameters. This is a valid extension that allows a dot script to behave identically to a function. FUTURE DIRECTIONS top None. SEE ALSO top Section 2.14, Special Built-In Utilities, return(1p) COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . 
Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 DOT(1P)
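To make the "current environment" wording above concrete, here is a small sketch; the file name foobar is reused from the EXAMPLES section, and the comparison with running the file in a child shell is an added illustration:

    printf 'foo=hello\n' > ./foobar
    sh ./foobar && echo "${foo:-unset}"    # prints "unset": the assignment happened in a child shell
    . ./foobar && echo "${foo:-unset}"     # prints "hello": dot runs the file in the current environment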
# dot

> Render an image of a `linear directed` network graph from a `graphviz` file.
> Layouts: `dot`, `neato`, `twopi`, `circo`, `fdp`, `sfdp`, `osage` & `patchwork`.
> More information: <https://graphviz.org/doc/info/command.html>.

- Render a PNG image with a filename based on the input filename and output format (uppercase -O):

`dot -T {{png}} -O {{path/to/input.gv}}`

- Render an SVG image with the specified output filename (lowercase -o):

`dot -T {{svg}} -o {{path/to/image.svg}} {{path/to/input.gv}}`

- Render the output in PS, PDF, SVG, Fig, PNG, GIF, JPEG, JSON, or DOT format:

`dot -T {{format}} -O {{path/to/input.gv}}`

- Render a GIF image using `stdin` and `stdout`:

`echo "{{digraph {this -> that} }}" | dot -T {{gif}} > {{path/to/image.gif}}`

- Display help:

`dot -?`
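A minimal end-to-end version of the first example above; the file name example.gv and the graph contents are made up for illustration:

    cat > example.gv <<'EOF'
    digraph G {
        a -> b;
        b -> c;
    }
    EOF
    dot -Tpng -O example.gv    # with -O, the output name is derived from the input, e.g. example.gv.png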
dpkg
dpkg(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dpkg(1) Linux manual page NAME | SYNOPSIS | WARNING | DESCRIPTION | INFORMATION ABOUT PACKAGES | ACTIONS | OPTIONS | EXIT STATUS | ENVIRONMENT | FILES | SECURITY | BUGS | EXAMPLES | ADDITIONAL FUNCTIONALITY | SEE ALSO | AUTHORS | COLOPHON dpkg(1) dpkg suite dpkg(1) NAME top dpkg - package manager for Debian SYNOPSIS top dpkg [option...] action WARNING top This manual is intended for users wishing to understand dpkg's command line options and package states in more detail than that provided by dpkg --help. It should not be used by package maintainers wishing to understand how dpkg will install their packages. The descriptions of what dpkg does when installing and removing packages are particularly inadequate. DESCRIPTION top dpkg is a medium-level tool to install, build, remove and manage Debian packages. The primary and more user-friendly front-end for dpkg as a CLI (command-line interface) is apt(8) and as a TUI (terminal user interface) is aptitude(8). dpkg itself is controlled entirely via command line parameters, which consist of exactly one action and zero or more options. The action-parameter tells dpkg what to do and options control the behavior of the action in some way. dpkg can also be used as a front-end to dpkg-deb(1) and dpkg-query(1). The list of supported actions can be found later on in the ACTIONS section. If any such action is encountered dpkg just runs dpkg-deb or dpkg-query with the parameters given to it, but no specific options are currently passed to them, to use any such option the back-ends need to be called directly. INFORMATION ABOUT PACKAGES top dpkg maintains some usable information about available packages. The information is divided in three classes: states, selection states and flags. These values are intended to be changed mainly with dselect. Package states not-installed The package is not installed on your system. config-files Only the configuration files or the postrm script and the data it needs to remove of the package exist on the system. half-installed The installation of the package has been started, but not completed for some reason. unpacked The package is unpacked, but not configured. half-configured The package is unpacked and configuration has been started, but not yet completed for some reason. triggers-awaited The package awaits trigger processing by another package. triggers-pending The package has been triggered. installed The package is correctly unpacked and configured. Package selection states install The package is selected for installation. hold A package marked to be on hold is kept on the same version, that is, no automatic new installs, upgrades or removals will be performed on them, unless these actions are requested explicitly, or are permitted to be done automatically with the --force-hold option. deinstall The package is selected for deinstallation (i.e. we want to remove all files, except configuration files). purge The package is selected to be purged (i.e. we want to remove everything from system directories, even configuration files). unknown The package selection is unknown. A package that is also in a not-installed state, and with an ok flag will be forgotten in the next database store. Package flags ok A package marked ok is in a known state, but might need further processing. reinstreq A package marked reinstreq is broken and requires reinstallation. 
These packages cannot be removed, unless forced with option --force-remove-reinstreq. ACTIONS top -i, --install package-file... Install the package. If --recursive or -R option is specified, package-file must refer to a directory instead. Installation consists of the following steps: 1. Extract the control files of the new package. 2. If another version of the same package was installed before the new installation, execute prerm script of the old package. 3. Run preinst script, if provided by the package. 4. Unpack the new files, and at the same time back up the old files, so that if something goes wrong, they can be restored. 5. If another version of the same package was installed before the new installation, execute the postrm script of the old package. Note that this script is executed after the preinst script of the new package, because new files are written at the same time old files are removed. 6. Configure the package. See --configure for detailed information about how this is done. --unpack package-file... Unpack the package, but don't configure it. If --recursive or -R option is specified, package-file must refer to a directory instead. Will process triggers for Pre-Depends unless --no-triggers has been specified. --configure package...|-a|--pending Configure a package which has been unpacked but not yet configured. If -a or --pending is given instead of package, all unpacked but unconfigured packages are configured. To reconfigure a package which has already been configured, try the dpkg-reconfigure(8) command instead (which is part of the debconf project). Configuring consists of the following steps: 1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong. 2. Run postinst script, if provided by the package. Will process triggers unless --no-triggers has been specified. --triggers-only package...|-a|--pending Processes only triggers (since dpkg 1.14.17). All pending triggers will be processed. If package names are supplied only those packages' triggers will be processed, exactly once each where necessary. Use of this option may leave packages in the improper triggers-awaited and triggers-pending states. This can be fixed later by running: dpkg --configure --pending. -r, --remove package...|-a|--pending Remove an installed package. This removes everything except conffiles and other data cleaned up by the postrm script, which may avoid having to reconfigure the package if it is reinstalled later (conffiles are configuration files that are listed in the DEBIAN/conffiles control file). If there is no DEBIAN/conffiles control file nor DEBIAN/postrm script, this command is equivalent to calling --purge. If -a or --pending is given instead of a package name, then all packages unpacked, but marked to be removed in file /usr/local/var/lib/dpkg/status, are removed. Removing of a package consists of the following steps: 1. Run prerm script. 2. Remove the installed files. 3. Run postrm script. Will process triggers unless --no-triggers has been specified. -P, --purge package...|-a|--pending Purge an installed or already removed package. This removes everything, including conffiles, and anything else cleaned up from postrm. If -a or --pending is given instead of a package name, then all packages unpacked or removed, but marked to be purged in file /usr/local/var/lib/dpkg/status, are purged. 
Note: Some configuration files might be unknown to dpkg because they are created and handled separately through the configuration scripts. In that case, dpkg won't remove them by itself, but the package's postrm script (which is called by dpkg), has to take care of their removal during purge. Of course, this only applies to files in system directories, not configuration files written to individual users' home directories. Purging of a package consists of the following steps: 1. Remove the package, if not already removed. See --remove for detailed information about how this is done. 2. Run postrm script. Will process triggers unless --no-triggers has been specified. -V, --verify [package-name...] Verifies the integrity of package-name or all packages if omitted, by comparing information from the files installed by a package with the files metadata information stored in the dpkg database (since dpkg 1.17.2). The origin of the files metadata information in the database is the binary packages themselves. That metadata gets collected at package unpack time during the installation process. Currently the only functional check performed is an md5sum verification of the file contents against the stored value in the files database. It will only get checked if the database contains the file md5sum. To check for any missing metadata in the database, the --audit command can be used. This is only an integrity check and should not be considered as any kind of security verification. The output format is selectable with the --verify-format option, which by default uses the rpm format, but that might change in the future, and as such, programs parsing this command output should be explicit about the format they expect. -C, --audit [package-name...] Performs database sanity and consistency checks for package- name or all packages if omitted (per package checks since dpkg 1.17.10). For example, searches for packages that have been installed only partially on your system or that have missing, wrong or obsolete control data or files. dpkg will suggest what to do with them to get them fixed. --update-avail [Packages-file] --merge-avail [Packages-file] Update dpkg's and dselect's idea of which packages are available. With action --merge-avail, old information is combined with information from Packages-file. With action --update-avail, old information is replaced with the information in the Packages-file. The Packages-file distributed with Debian is simply named Packages. If the Packages-file argument is missing or named - then it will be read from standard input (since dpkg 1.17.7). dpkg keeps its record of available packages in /usr/local/var/lib/dpkg/available. A simpler one-shot command to retrieve and update the available file is dselect update. Note that this file is mostly useless if you don't use dselect but an APT-based frontend: APT has its own system to keep track of available packages. -A, --record-avail package-file... Update dpkg and dselect's idea of which packages are available with information from the package package-file. If --recursive or -R option is specified, package-file must refer to a directory instead. --forget-old-unavail Now obsolete and a no-op as dpkg will automatically forget uninstalled unavailable packages (since dpkg 1.15.4), but only those that do not contain user information such as package selections. --clear-avail Erase the existing information about what packages are available. --get-selections [package-name-pattern...] Get list of package selections, and write it to stdout. 
Without a pattern, non-installed packages (i.e. those which have been previously purged) will not be shown. --set-selections Set package selections using file read from stdin. This file should be in the format package state, where state is one of install, hold, deinstall or purge. Blank lines and comment lines beginning with # are also permitted. The available file needs to be up-to-date for this command to be useful, otherwise unknown packages will be ignored with a warning. See the --update-avail and --merge-avail commands for more information. --clear-selections Set the requested state of every non-essential package to deinstall (since dpkg 1.13.18). This is intended to be used immediately before --set-selections, to deinstall any packages not in list given to --set-selections. --yet-to-unpack Searches for packages selected for installation, but which for some reason still haven't been installed. Note: This command makes use of both the available file and the package selections. --predep-package Print a single package which is the target of one or more relevant pre-dependencies and has itself no unsatisfied pre- dependencies. If such a package is present, output it as a Packages file entry, which can be massaged as appropriate. Note: This command makes use of both the available file and the package selections. Returns 0 when a package is printed, 1 when no suitable package is available and 2 on error. --add-architecture architecture Add architecture to the list of architectures for which packages can be installed without using --force-architecture (since dpkg 1.16.2). The architecture dpkg is built for (i.e. the output of --print-architecture) is always part of that list. --remove-architecture architecture Remove architecture from the list of architectures for which packages can be installed without using --force-architecture (since dpkg 1.16.2). If the architecture is currently in use in the database then the operation will be refused, except if --force-architecture is specified. The architecture dpkg is built for (i.e. the output of --print-architecture) can never be removed from that list. --print-architecture Print architecture of packages dpkg installs (for example, i386). --print-foreign-architectures Print a newline-separated list of the extra architectures dpkg is configured to allow packages to be installed for (since dpkg 1.16.2). --assert-help Give help about the --assert-feature options (since dpkg 1.21.0). --assert-feature Asserts that dpkg supports the requested feature. Returns 0 if the feature is fully supported, 1 if the feature is known but dpkg cannot provide support for it yet, and 2 if the feature is unknown. The current list of assertable features is: support-predepends Supports the Pre-Depends field (since dpkg 1.1.0). working-epoch Supports epochs in version strings (since dpkg 1.4.0.7). long-filenames Supports long filenames in deb(5) archives (since dpkg 1.4.1.17). multi-conrep Supports multiple Conflicts and Replaces (since dpkg 1.4.1.19). multi-arch Supports multi-arch fields and semantics (since dpkg 1.16.2). versioned-provides Supports versioned Provides (since dpkg 1.17.11). protected-field Supports the Protected field (since dpkg 1.20.1). --validate-thing string Validate that the thing string has a correct syntax (since dpkg 1.18.16). Returns 0 if the string is valid, 1 if the string is invalid but might be accepted in lax contexts, and 2 if the string is invalid. 
The current list of validatable things is: pkgname Validates the given package name (since dpkg 1.18.16). trigname Validates the given trigger name (since dpkg 1.18.16). archname Validates the given architecture name (since dpkg 1.18.16). version Validates the given version (since dpkg 1.18.16). --compare-versions ver1 op ver2 Compare version numbers, where op is a binary operator. dpkg returns true (0) if the specified condition is satisfied, and false (1) otherwise. There are two groups of operators, which differ in how they treat an empty ver1 or ver2. These treat an empty version as earlier than any version: lt le eq ne ge gt. These treat an empty version as later than any version: lt-nl le-nl ge-nl gt-nl. These are provided only for compatibility with control file syntax: < << <= = >= >> >. The < and > operators are obsolete and should not be used, due to confusing semantics. To illustrate: 0.1 < 0.1 evaluates to true. -?, --help Display a brief help message. --force-help Give help about the --force-thing options. -Dh, --debug=help Give help about debugging options. --version Display dpkg version information. When used with --robot, the output will be the program version number in a dotted numerical format, with no newline. dpkg-deb actions See dpkg-deb(1) for more information about the following actions, and other actions and options not exposed by the dpkg front-end. -b, --build directory [archive|directory] Build a deb package. -c, --contents archive List contents of a deb package. -e, --control archive [directory] Extract control-information from a package. -x, --extract archive directory Extract the files contained by package. -X, --vextract archive directory Extract and display the filenames contained by a package. -f, --field archive [control-field...] Display control field(s) of a package. --ctrl-tarfile archive Output the control tar-file contained in a Debian package. --fsys-tarfile archive Output the filesystem tar-file contained by a Debian package. -I, --info archive [control-file...] Show information about a package. dpkg-query actions See dpkg-query(1) for more information about the following actions, and other actions and options not exposed by the dpkg front-end. -l, --list package-name-pattern... List packages matching given pattern. -s, --status package-name... Report status of specified package. -L, --listfiles package-name... List files installed to your system from package-name. -S, --search filename-search-pattern... Search for a filename from installed packages. -p, --print-avail package-name... Display details about package-name, as found in /usr/local/var/lib/dpkg/available. Users of APT-based frontends should use apt show package-name instead. OPTIONS top All options can be specified both on the command line and in the dpkg configuration file /usr/local/etc/dpkg/dpkg.cfg or fragment files (with names matching this shell pattern '[0-9a-zA-Z_-]*') on the configuration directory /usr/local/etc/dpkg/dpkg.cfg.d/. Each line in the configuration file is either an option (exactly the same as the command line option but without leading hyphens) or a comment (if it starts with a #). --abort-after=number Change after how many errors dpkg will abort. The default is 50. -B, --auto-deconfigure When a package is removed, there is a possibility that another installed package depended on the removed package. Specifying this option will cause automatic deconfiguration of the package which depended on the removed package. -Doctal, --debug=octal Switch debugging on. 
octal is formed by bitwise-ORing desired values together from the list below (note that these values may change in future releases). -Dh or --debug=help display these debugging values. Number Description 1 Generally helpful progress information 2 Invocation and status of maintainer scripts 10 Output for each file processed 100 Lots of output for each file processed 20 Output for each configuration file 200 Lots of output for each configuration file 40 Dependencies and conflicts 400 Lots of dependencies/conflicts output 10000 Trigger activation and processing 20000 Lots of output regarding triggers 40000 Silly amounts of output regarding triggers 1000 Lots of drivel about for example the dpkg/info dir 2000 Insane amounts of drivel --force-things --no-force-things, --refuse-things Force or refuse (no-force and refuse mean the same thing) to do some things. things is a comma separated list of things specified below. --force-help displays a message describing them. Things marked with (*) are forced by default. Warning: These options are mostly intended to be used by experts only. Using them without fully understanding their effects may break your whole system. all: Turns on (or off) all force options. downgrade(*): Install a package, even if newer version of it is already installed. Warning: At present dpkg does not do any dependency checking on downgrades and therefore will not warn you if the downgrade breaks the dependency of some other package. This can have serious side effects, downgrading essential system components can even make your whole system unusable. Use with care. configure-any: Configure also any unpacked but unconfigured packages on which the current package depends. hold: Allow automatic installs, upgrades or removals of packages even when marked to be on hold. Note: When these actions are requested explicitly, the hold package selection state always gets ignored. remove-reinstreq: Remove a package, even if it's broken and marked to require reinstallation. This may, for example, cause parts of the package to remain on the system, which will then be forgotten by dpkg. remove-protected: Remove, even if the package is considered protected (since dpkg 1.20.1). Protected packages contain mostly important system boot infrastructure or are used for custom system- local meta-packages. Removing them might cause the whole system to be unable to boot or lose required functionality to operate, so use with caution. remove-essential: Remove, even if the package is considered essential. Essential packages contain mostly very basic Unix commands, required for the packaging system, for the operation of the system in general or during boot (although the latter should be converted to protected packages instead). Removing them might cause the whole system to stop working, so use with caution. depends: Turn all dependency problems into warnings. This affects the Pre-Depends and Depends fields. depends-version: Don't care about versions when checking dependencies. This affects the Pre-Depends and Depends fields. breaks: Install, even if this would break another package (since dpkg 1.14.6). This affects the Breaks field. conflicts: Install, even if it conflicts with another package. This is dangerous, for it will usually cause overwriting of some files. This affects the Conflicts field. confmiss: Always install the missing conffile without prompting. This is dangerous, since it means not preserving a change (removing) made to the file. 
confnew: If a conffile has been modified and the version in the package did change, always install the new version without prompting, unless the --force-confdef is also specified, in which case the default action is preferred. confold: If a conffile has been modified and the version in the package did change, always keep the old version without prompting, unless the --force-confdef is also specified, in which case the default action is preferred. confdef: If a conffile has been modified and the version in the package did change, always choose the default action without prompting. If there is no default action it will stop to ask the user unless --force-confnew or --force-confold is also given, in which case it will use that to decide the final action. confask: If a conffile has been modified always offer to replace it with the version in the package, even if the version in the package did not change (since dpkg 1.15.8). If any of --force-confnew, --force-confold, or --force-confdef is also given, it will be used to decide the final action. overwrite: Overwrite one package's file with another's file. overwrite-dir: Overwrite one package's directory with another's file. overwrite-diverted: Overwrite a diverted file with an undiverted version. statoverride-add: Overwrite an existing stat override when adding it (since dpkg 1.19.5). statoverride-remove: Ignore a missing stat override when removing it (since dpkg 1.19.5). security-mac(*): Use platform-specific Mandatory Access Controls (MAC) based security when installing files into the filesystem (since dpkg 1.19.5). On Linux systems the implementation uses SELinux. unsafe-io: Do not perform safe I/O operations when unpacking (since dpkg 1.15.8.6). Currently this implies not performing file system syncs before file renames, which is known to cause substantial performance degradation on some file systems, unfortunately the ones that require the safe I/O on the first place due to their unreliable behaviour causing zero- length files on abrupt system crashes. Note: For ext4, the main offender, consider using instead the mount option nodelalloc, which will fix both the performance degradation and the data safety issues, the latter by making the file system not produce zero-length files on abrupt system crashes with any software not doing syncs before atomic renames. Warning: Using this option might improve performance at the cost of losing data, use with care. script-chrootless: Run maintainer scripts without chroot(2)ing into instdir even if the package does not support this mode of operation (since dpkg 1.18.5). Warning: This can destroy your host system, use with extreme care. architecture: Process even packages with wrong or no architecture. bad-version: Process even packages with wrong versions (since dpkg 1.16.1). bad-path: PATH is missing important programs, so problems are likely. not-root: Try to (de)install things even when not root. bad-verify: Install a package even if it fails authenticity check. --ignore-depends=package,... Ignore dependency-checking for specified packages (actually, checking is performed, but only warnings about conflicts are given, nothing else). This affects the Pre-Depends, Depends and Breaks fields. --no-act, --dry-run, --simulate Do everything which is supposed to be done, but don't write any changes. This is used to see what would happen with the specified action, without actually modifying anything. Be sure to give --no-act before the action-parameter, or you might end up with undesirable results. (e.g. 
dpkg --purge foo --no-act will first purge package foo and then try to purge package --no-act, even though you probably expected it to actually do nothing). -R, --recursive Recursively handle all regular files matching pattern *.deb found at specified directories and all of its subdirectories. This can be used with -i, -A, --install, --unpack and --record-avail actions. -G Don't install a package if a newer version of the same package is already installed. This is an alias of --refuse-downgrade. --admindir=dir Set the administrative directory to directory. This directory contains many files that give information about status of installed or uninstalled packages, etc. Defaults to /usr/local/var/lib/dpkg if DPKG_ADMINDIR has not been set. --instdir=dir Set the installation directory, which refers to the directory where packages are to be installed. instdir is also the directory passed to chroot(2) before running package's installation scripts, which means that the scripts see instdir as a root directory. Defaults to /. --root=dir Set the root directory to directory, which sets the installation directory to dir and the administrative directory to dir/usr/local/var/lib/dpkg. -O, --selected-only Only process the packages that are selected for installation. The actual marking is done with dselect or by dpkg, when it handles packages. For example, when a package is removed, it will be marked selected for deinstallation. -E, --skip-same-version Don't install the package if the same version and architecture of the package is already installed. Since dpkg 1.21.10, the architecture is also taken into account, which makes it possible to cross-grade packages or install additional co-installable instances with the same version, but different architecture. --pre-invoke=command --post-invoke=command Set an invoke hook command to be run via sh -c before or after the dpkg run for the unpack, configure, install, triggers-only, remove and purge actions (since dpkg 1.15.4), and add-architecture and remove-architecture actions (since dpkg 1.17.19). This option can be specified multiple times. The order the options are specified is preserved, with the ones from the configuration files taking precedence. The environment variable DPKG_HOOK_ACTION is set for the hooks to the current dpkg action. Note: Front-ends might call dpkg several times per invocation, which might run the hooks more times than expected. --path-exclude=glob-pattern --path-include=glob-pattern Set glob-pattern as a path filter, either by excluding or re- including previously excluded paths matching the specified patterns during install (since dpkg 1.15.8). Warning: Take into account that depending on the excluded paths you might completely break your system, use with caution. The glob patterns use the same wildcards used in the shell, were * matches any sequence of characters, including the empty string and also /. For example, /usr/*/READ* matches /usr/share/doc/package/README. As usual, ? matches any single character (again, including /). And [ starts a character class, which can contain a list of characters, ranges and complementations. See glob(7) for detailed information about globbing. Note: The current implementation might re-include more directories and symlinks than needed, in particular when there is a more specific re- inclusion, to be on the safe side and avoid possible unpack failures; future work might fix this. 
This can be used to remove all paths except some particular ones; a typical case is: --path-exclude=/usr/share/doc/* --path-include=/usr/share/doc/*/copyright to remove all documentation files except the copyright files. These two options can be specified multiple times, and interleaved with each other. Both are processed in the given order, with the last rule that matches a file name making the decision. The filters are applied when unpacking the binary packages, and as such only have knowledge of the type of object currently being filtered (e.g. a normal file or a directory) and have not visibility of what objects will come next. Because these filters have side effects (in contrast to find(1) filters), excluding an exact pathname that happens to be a directory object like /usr/share/doc will not have the desired result, and only that pathname will be excluded (which could be automatically reincluded if the code sees the need). Any subsequent files contained within that directory will fail to unpack. Hint: make sure the globs are not expanded by your shell. --verify-format format-name Sets the output format for the --verify command (since dpkg 1.17.2). The only currently supported output format is rpm, which consists of a line for every path that failed any check. These lines have the following format: missing [c] pathname [(error-message)] ??5?????? [c] pathname The first 9 characters are used to report the checks result, either a literal missing when the file is not present or its metadata cannot be fetched, or one of the following special characters that report the result for each check: ? Implies the check could not be done (lack of support, file permissions, etc). . Implies the check passed. A-Za-z0-9 Implies a specific check failed. The following positions and alphanumeric characters are currently supported: 1 ? These checks are currently not supported, will always be ?. 2 M The file mode check failed (since dpkg 1.21.0). Because pathname metadata is currently not tracked, this check can only be partially emulated via a very simple heuristic for pathnames that have a known digest, which implies they should be regular files, where the check will fail if the pathname is not a regular file on the filesystem. This check will currently never succeed as it does not have enough information available. 3 5 The digest check failed, which means the file contents have changed. This is only an integrity check and should not be considered as any kind of security verification. 4-9 ? These checks are currently not supported, will always be ?. The line is followed by a space and an attribute character. The following attribute character is supported: c The pathname is a conffile. Finally followed by another space and the pathname. In case the entry was of the missing type, and the file was not actually present on the filesystem, then the line is followed by a space and the error message enclosed within parenthesis. --status-fd n Send machine-readable package status and progress information to file descriptor n. This option can be specified multiple times. The information is generally one record per line, in one of the following forms: status: package: status Package status changed; status is as in the status file. status: package : error : extended-error-message An error occurred. Any possible newlines in extended- error-message will be converted to spaces before output. status: file : conffile-prompt : 'real-old' 'real-new' useredited distedited User is being asked a conffile question. 
processing: stage: package Sent just before a processing stage starts. stage is one of upgrade, install (both sent before unpacking), configure, trigproc, disappear, remove, purge. --status-logger=command Send machine-readable package status and progress information to the shell command's standard input, to be run via sh -c (since dpkg 1.16.0). This option can be specified multiple times. The output format used is the same as in --status-fd. --log=filename Log status change updates and actions to filename, instead of the default /usr/local/var/log/dpkg.log. If this option is given multiple times, the last filename is used. Log messages are of the form: YYYY-MM-DD HH:MM:SS startup type command For each dpkg invocation where type is archives (with a command of unpack or install) or packages (with a command of configure, triggers-only, remove or purge). YYYY-MM-DD HH:MM:SS status state pkg installed-version For status change updates. YYYY-MM-DD HH:MM:SS action pkg installed-version available- version For actions where action is one of install, upgrade, configure, trigproc, disappear, remove or purge. YYYY-MM-DD HH:MM:SS conffile filename decision For conffile changes where decision is either install or keep. --robot Use a machine-readable output format. This provides an interface for programs that need to parse the output of some of the commands that do not otherwise emit a machine-readable output format. No localization will be used, and the output will be modified to make it easier to parse. The only currently supported command is --version. --no-pager Disables the use of any pager when showing information (since dpkg 1.19.2). --no-debsig Do not try to verify package signatures. --no-triggers Do not run any triggers in this run (since dpkg 1.14.17), but activations will still be recorded. If used with --configure package or --triggers-only package then the named package postinst will still be run even if only a triggers run is needed. Use of this option may leave packages in the improper triggers-awaited and triggers-pending states. This can be fixed later by running: dpkg --configure --pending. --triggers Cancels a previous --no-triggers (since dpkg 1.14.17). EXIT STATUS top 0 The requested action was successfully performed. Or a check or assertion command returned true. 1 A check or assertion command returned false. 2 Fatal or unrecoverable error due to invalid command-line usage, or interactions with the system, such as accesses to the database, memory allocations, etc. ENVIRONMENT top External environment PATH This variable is expected to be defined in the environment and point to the system paths where several required programs are to be found. If it's not set or the programs are not found, dpkg will abort. HOME If set, dpkg will use it as the directory from which to read the user specific configuration file. TMPDIR If set, dpkg will use it as the directory in which to create temporary files and directories. SHELL The program dpkg will execute when starting a new interactive shell, or when spawning a command via a shell. PAGER DPKG_PAGER The program dpkg will execute when running a pager, which will be executed with $SHELL -c, for example when displaying the conffile differences. If SHELL is not set, sh will be used instead. The DPKG_PAGER overrides the PAGER environment variable (since dpkg 1.19.2). DPKG_COLORS Sets the color mode (since dpkg 1.18.5). The currently accepted values are: auto (default), always and never. 
DPKG_DEBUG Sets the debug mask (since dpkg 1.21.10) from an octal value. The currently accepted flags are described in the --debug option. DPKG_FORCE Sets the force flags (since dpkg 1.19.5). When this variable is present, no built-in force defaults will be applied. If the variable is present but empty, all force flags will be disabled. DPKG_ADMINDIR If set and the --admindir or --root options have not been specified, it will be used as the dpkg administrative directory (since dpkg 1.20.0). DPKG_FRONTEND_LOCKED Set by a package manager frontend to notify dpkg that it should not acquire the frontend lock (since dpkg 1.19.1). Internal environment LESS Defined by dpkg to -FRSXMQ, if not already set, when spawning a pager (since dpkg 1.19.2). To change the default behavior, this variable can be preset to some other value including an empty string, or the PAGER or DPKG_PAGER variables can be set to disable specific options with -+, for example DPKG_PAGER="less -+F". DPKG_ROOT Defined by dpkg on the maintainer script environment to indicate which installation to act on (since dpkg 1.18.5). The value is intended to be prepended to any path maintainer scripts operate on. During normal operation, this variable is empty. When installing packages into a different instdir, dpkg normally invokes maintainer scripts using chroot(2) and leaves this variable empty, but if --force-script-chrootless is specified then the chroot(2) call is skipped and instdir is non-empty. DPKG_ADMINDIR Defined by dpkg on the maintainer script environment to indicate the dpkg administrative directory to use (since dpkg 1.16.0). This variable is always set to the current --admindir value. DPKG_FORCE Defined by dpkg on the subprocesses environment to all the currently enabled force option names separated by commas (since dpkg 1.19.5). DPKG_SHELL_REASON Defined by dpkg on the shell spawned on the conffile prompt to examine the situation (since dpkg 1.15.6). Current valid value: conffile-prompt. DPKG_CONFFILE_OLD Defined by dpkg on the shell spawned on the conffile prompt to examine the situation (since dpkg 1.15.6). Contains the path to the old conffile. DPKG_CONFFILE_NEW Defined by dpkg on the shell spawned on the conffile prompt to examine the situation (since dpkg 1.15.6). Contains the path to the new conffile. DPKG_HOOK_ACTION Defined by dpkg on the shell spawned when executing a hook action (since dpkg 1.15.4). Contains the current dpkg action. DPKG_RUNNING_VERSION Defined by dpkg on the maintainer script environment to the version of the currently running dpkg instance (since dpkg 1.14.17). DPKG_MAINTSCRIPT_PACKAGE Defined by dpkg on the maintainer script environment to the (non-arch-qualified) package name being handled (since dpkg 1.14.17). DPKG_MAINTSCRIPT_PACKAGE_REFCOUNT Defined by dpkg on the maintainer script environment to the package reference count, i.e. the number of package instances with a state greater than not-installed (since dpkg 1.17.2). DPKG_MAINTSCRIPT_ARCH Defined by dpkg on the maintainer script environment to the architecture the package got built for (since dpkg 1.15.4). DPKG_MAINTSCRIPT_NAME Defined by dpkg on the maintainer script environment to the name of the script running, one of preinst, postinst, prerm or postrm (since dpkg 1.15.7). DPKG_MAINTSCRIPT_DEBUG Defined by dpkg on the maintainer script environment to a value (0 or 1) noting whether debugging has been requested (with the --debug option) for the maintainer scripts (since dpkg 1.18.4). 
FILES top /usr/local/etc/dpkg/dpkg.cfg.d/[0-9a-zA-Z_-]* Configuration fragment files (since dpkg 1.15.4). /usr/local/etc/dpkg/dpkg.cfg Configuration file with default options. /usr/local/var/log/dpkg.log Default log file (see /usr/local/etc/dpkg/dpkg.cfg and option --log). The other files listed below are in their default directories, see option --admindir to see how to change locations of these files. /usr/local/var/lib/dpkg/available List of available packages. /usr/local/var/lib/dpkg/status Statuses of available packages. This file contains information about whether a package is marked for removing or not, whether it is installed or not, etc. See section "INFORMATION ABOUT PACKAGES" for more info. The status file is backed up daily in /usr/local/var/backups. It can be useful if it's lost or corrupted due to filesystems troubles. The format and contents of a binary package are described in deb(5). Filesystem filenames During unpacking and configuration dpkg uses various filenames for backup and rollback purposes. The following is a simplified explanation of how these filenames get used during package installation. *.dpkg-new During unpack, dpkg extracts new filesystem objects into pathname.dpkg-new (except for existing directories or symlinks to directories which get skipped), once that is done and after having performed backups of the old objects, the objects get renamed to pathname. *.dpkg-tmp During unpack, dpkg makes backups of the old filesystem objects into pathname.dpkg-tmp after extracting the new objects. These backups are performed as either a rename for directories (but only if they switch file type), a new symlink copy for symlinks, or a hard link for any other filesystem object, except for conffiles which get no backups because they are processed at a later stage. In case of needing to rollback, these backups get used to restore the previous contents of the objects. These get removed automatically after the installation is complete. *.dpkg-old During configuration, when installing a new version, dpkg can make a backup of the previous modified conffile into pathname.dpkg-old. *.dpkg-dist During configuration, when keeping the old version, dpkg can make a backup of the new unmodified conffile into pathname.dpkg-dist. SECURITY top Any operation that needs write access to the database or the filesystem is considered a privileged operation that might allow root escalation. These operations must never be delegated to an untrusted user or be done on untrusted packages, as that might allow root access to the system. Some operations (such as package verification) might need root privileges to be able to access files on the filesystem that would otherwise be inaccessible due to restricted permissions, but should otherwise work normally and produce appropriate messages in those cases. Query operations should never require root, and delegating their execution to unprivileged users via some gain-root command can have security implications (such as privilege escalation), for example when a pager is automatically invoked by the tool. See also the SECURITY section of the dpkg-deb(1) and dpkg-split(1) manual pages. BUGS top --no-act usually gives less information than might be helpful. 
EXAMPLES To list installed packages related to the editor vi(1) (note that dpkg-query does not load the available file anymore by default, and the dpkg-query --load-avail option should be used instead for that): dpkg -l '*vi*' To see the entries in /usr/local/var/lib/dpkg/available of two packages: dpkg --print-avail vim neovim | less To search the listing of packages yourself: dpkg --print-avail | less To remove an installed neovim package: dpkg -r neovim To install a package, you first need to find it in an archive or media disc. When using an archive based on a pool structure, knowing the archive area and the name of the package is enough to infer the pathname: dpkg -i /media/bdrom/pool/main/v/vim/vim_9.0.2018-1_amd64.deb To make a local copy of the package selection states: dpkg --get-selections >myselections You might transfer this file to another computer, and after having updated the available file there with your package manager frontend of choice (see <https://wiki.debian.org/Teams/Dpkg/FAQ#set-selections> for more details), for example: apt-cache dumpavail | dpkg --merge-avail you can install it with: dpkg --clear-selections dpkg --set-selections <myselections Note that this will not actually install or remove anything, but just set the selection state on the requested packages. You will need some other application to actually download and install the requested packages. For example, run apt-get dselect-upgrade. Ordinarily, you will find that dselect(1) provides a more convenient way to modify the package selection states. ADDITIONAL FUNCTIONALITY Additional functionality can be gained by installing any of the following packages: apt, aptitude and debsig-verify. SEE ALSO aptitude(8), apt(8), dselect(1), dpkg-deb(1), dpkg-query(1), deb(5), deb-control(5), dpkg.cfg(5), and dpkg-reconfigure(8). AUTHORS See /usr/local/share/doc/dpkg/THANKS for the list of people who have contributed to dpkg. 1.22.1-2-gddb42 2023-10-30 dpkg(1)
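The --compare-versions action documented above is convenient in scripts because the result is carried in the exit status; a small sketch with arbitrary example version strings:

    # exit status 0 means the relation holds; lt treats an empty version as earlier than any version
    if dpkg --compare-versions 1.2.3-1 lt 1.2.10-1; then
        echo "1.2.3-1 sorts before 1.2.10-1"
    fi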
# dpkg

> Debian package manager.
> Some subcommands such as `dpkg deb` have their own usage documentation.
> For equivalent commands in other package managers, see <https://wiki.archlinux.org/title/Pacman/Rosetta>.
> More information: <https://manpages.debian.org/latest/dpkg/dpkg.html>.

- Install a package:

`dpkg -i {{path/to/file.deb}}`

- Remove a package:

`dpkg -r {{package}}`

- List installed packages:

`dpkg -l {{pattern}}`

- List a package's contents:

`dpkg -L {{package}}`

- List contents of a local package file:

`dpkg -c {{path/to/file.deb}}`

- Find out which package owns a file:

`dpkg -S {{path/to/file}}`
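One pitfall worth showing from the manual above: --no-act must be given before the action, otherwise the trailing option is parsed as another package name (neovim is just an example package):

    dpkg --no-act -r neovim    # simulates the removal without writing any changes
    dpkg -r neovim --no-act    # wrong order: removes neovim for real, then tries to remove a package named "--no-act"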
dpkg-deb
dpkg-deb(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dpkg-deb(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMANDS | OPTIONS | EXIT STATUS | ENVIRONMENT | NOTES | SECURITY | BUGS | SEE ALSO | COLOPHON dpkg-deb(1) dpkg suite dpkg-deb(1) NAME top dpkg-deb - Debian package archive (.deb) manipulation tool SYNOPSIS top dpkg-deb [option...] command DESCRIPTION top dpkg-deb packs, unpacks and provides information about Debian archives. Use dpkg to install and remove packages from your system. You can also invoke dpkg-deb by calling dpkg with whatever options you want to pass to dpkg-deb. dpkg will spot that you wanted dpkg-deb and run it for you. For most commands taking an input archive argument, the archive can be read from standard input if the archive name is given as a single minus character (-); otherwise lack of support will be documented in their respective command description. COMMANDS top -b, --build binary-directory [archive|directory] Creates a debian archive from the filesystem tree stored in binary-directory. binary-directory must have a DEBIAN subdirectory, which contains the control information files such as the control file itself. This directory will not appear in the binary package's filesystem archive, but instead the files in it will be put in the binary package's control information area. Unless you specify --nocheck, dpkg-deb will read DEBIAN/control and parse it. It will check the file for syntax errors and other problems, and display the name of the binary package being built. dpkg-deb will also check the permissions of the maintainer scripts and other files found in the DEBIAN control information directory. If no archive is specified then dpkg-deb will write the package into the file binary-directory.deb. If the archive to be created already exists it will be overwritten. If the second argument is a directory then dpkg-deb will write to the file directory/package_version_arch.deb. When a target directory is specified, rather than a file, the --nocheck option may not be used (since dpkg-deb needs to read and parse the package control file to determine which filename to use). -I, --info archive [control-file-name...] Provides information about a binary package archive. If no control-file-names are specified then it will print a summary of the contents of the package as well as its control file. If any control-file-names are specified then dpkg-deb will print them in the order they were specified; if any of the components weren't present it will print an error message to stderr about each one and exit with status 2. -W, --show archive Provides information about a binary package archive in the format specified by the --showformat argument. The default format displays the package's name and version on one line, separated by a tabulator. -f, --field archive [control-field-name...] Extracts control file information from a binary package archive. If no control-field-names are specified then it will print the whole control file. If any are specified then dpkg-deb will print their contents, in the order in which they appear in the control file. If more than one control-field-name is specified then dpkg-deb will precede each with its field name (and a colon and space). No errors are reported for fields requested but not found. -c, --contents archive Lists the contents of the filesystem tree archive portion of the package archive. It is currently produced in the format generated by tar's verbose listing. 
-x, --extract archive directory Extracts the filesystem tree from a package archive into the specified directory. Note that extracting a package to the root directory will not result in a correct installation! Use dpkg to install packages. directory (but not its parents) will be created if necessary, and its permissions modified to match the contents of the package. -X, --vextract archive directory Is like --extract (-x) with --verbose (-v) which prints a listing of the files extracted as it goes. -R, --raw-extract archive directory Extracts the filesystem tree from a package archive into a specified directory, and the control information files into a DEBIAN subdirectory of the specified directory (since dpkg 1.16.1). The target directory (but not its parents) will be created if necessary. The input archive is not (currently) processed sequentially, so reading it from standard input (-) is not supported. --ctrl-tarfile archive Extracts the control data from a binary package and sends it to standard output in tar format (since dpkg 1.17.14). Together with tar(1) this can be used to extract a particular control file from a package archive. The input archive will always be processed sequentially. --fsys-tarfile archive Extracts the filesystem tree data from a binary package and sends it to standard output in tar format. Together with tar(1) this can be used to extract a particular file from a package archive. The input archive will always be processed sequentially. -e, --control archive [directory] Extracts the control information files from a package archive into the specified directory. If no directory is specified then a subdirectory DEBIAN in the current directory is used. The target directory (but not its parents) will be created if necessary. -?, --help Show the usage message and exit. --version Show the version and exit. OPTIONS top --showformat=format This option is used to specify the format of the output --show will produce. The format is a string that will be output for each package listed. The string may reference any status field using the ${field- name} form, a list of the valid fields can be easily produced using -I on the same package. A complete explanation of the formatting options (including escape sequences and field tabbing) can be found in the explanation of the --showformat option in dpkg-query(1). The default for this field is ${Package}\t${Version}\n. -zcompress-level Specify which compression level to use on the compressor backend, when building a package (default is 9 for gzip, 6 for xz, 3 for zstd). The accepted values are compressor specific. For gzip, from 0-9 with 0 being mapped to compressor none. For xz from 0-9. For zstd from 0-22, with levels from 20 to 22 enabling its ultra mode. Before dpkg 1.16.2 level 0 was equivalent to compressor none for all compressors. -Scompress-strategy Specify which compression strategy to use on the compressor backend, when building a package (since dpkg 1.16.2). Allowed values are none (since dpkg 1.16.4), filtered, huffman, rle and fixed for gzip (since dpkg 1.17.0) and extreme for xz. -Zcompress-type Specify which compression type to use when building a package. Allowed values are gzip, xz (since dpkg 1.15.6), zstd (since dpkg 1.21.18) and none (default is xz). --[no-]uniform-compression Specify that the same compression parameters should be used for all archive members (i.e. control.tar and data.tar; since dpkg 1.17.6). Otherwise only the data.tar member will use those parameters. 
The only supported compression types allowed to be uniformly used are none, gzip, xz and zstd. The --no-uniform-compression option disables uniform compression (since dpkg 1.19.0). Uniform compression is the default (since dpkg 1.19.0). --threads-max=threads Sets the maximum number of threads allowed for compressors that support multi-threaded operations (since dpkg 1.21.9). --root-owner-group Set the owner and group for each entry in the filesystem tree data to root with id 0 (since dpkg 1.19.0). Note: This option can be useful for rootless builds (see rootless-builds.txt), but should not be used when the entries have an owner or group that is not root. Support for these will be added later in the form of a meta manifest. --deb-format=format Set the archive format version used when building (since dpkg 1.17.0). Allowed values are 2.0 for the new format, and 0.939000 for the old one (default is 2.0). The old archive format is less easily parsed by non-Debian tools and is now obsolete; its only use is when building packages to be parsed by versions of dpkg older than 0.93.76 (September 1995), which was released as i386 a.out only. --nocheck Inhibits dpkg-deb --build's usual checks on the proposed contents of an archive. You can build any archive you want, no matter how broken, this way. -v, --verbose Enables verbose output (since dpkg 1.16.1). This currently only affects --extract making it behave like --vextract. -D, --debug Enables debugging output. This is not very interesting. EXIT STATUS top 0 The requested action was successfully performed. 2 Fatal or unrecoverable error due to invalid command-line usage, or interactions with the system, such as accesses to the database, memory allocations, etc. ENVIRONMENT top DPKG_DEB_THREADS_MAX Sets the maximum number of threads allowed for compressors that support multi-threaded operations (since dpkg 1.21.9). The --threads-max option overrides this value. DPKG_DEB_COMPRESSOR_TYPE Sets the compressor type to use (since dpkg 1.21.10). The -Z option overrides this value. DPKG_DEB_COMPRESSOR_LEVEL Sets the compressor level to use (since dpkg 1.21.10). The -z option overrides this value. DPKG_COLORS Sets the color mode (since dpkg 1.18.5). The currently accepted values are: auto (default), always and never. TMPDIR If set, dpkg-deb will use it as the directory in which to create temporary files and directories. SOURCE_DATE_EPOCH If set, it will be used as the timestamp (as seconds since the epoch) in the deb(5)'s ar(5) container and used to clamp the mtime in the tar(5) file entries. NOTES top Do not attempt to use just dpkg-deb to install software! You must use dpkg proper to ensure that all the files are correctly placed and the package's scripts run and its status and contents recorded. SECURITY top Examining untrusted package archives or extracting them into staging directories should be considered a security boundary, and any breakage of that boundary stemming from these operations should be considered a security vulnerability. But handling untrusted package archives should not be done lightly, as the surface area includes any compression library supported, in addition to the archive formats and control files themselves. Performing these operations over untrusted data as root is strongly discouraged. Building package archives should only be performed over trusted data. BUGS top dpkg-deb -I package1.deb package2.deb does the wrong thing. There is no authentication on .deb files; in fact, there isn't even a straightforward checksum. 
(Higher level tools like APT support authenticating .deb packages retrieved from a given repository, and most packages nowadays provide an md5sum control file generated by debian/rules. Though this is not directly supported by the lower level tools.) SEE ALSO top /usr/local/share/doc/dpkg/spec/rootless-builds.txt, deb(5), deb-control(5), dpkg(1), dselect(1).
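As the COMMANDS section above notes, `--ctrl-tarfile` and `--fsys-tarfile` stream tar data to standard output, which combines naturally with tar(1) to pull a single member out of an archive. A small sketch; `path/to/file.deb` and the member path are placeholders, and the leading `./` reflects how dpkg-deb names members in its tar streams.

```sh
# Print one file from the payload without unpacking the whole package.
dpkg-deb --fsys-tarfile path/to/file.deb | tar -xOf - ./usr/share/doc/hello/copyright

# Same idea for the control area: dump the control file to stdout.
dpkg-deb --ctrl-tarfile path/to/file.deb | tar -xOf - ./control
```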
# dpkg-deb\n\n> Pack, unpack and provide information about Debian archives.\n> More information: <https://manpages.debian.org/latest/dpkg/dpkg-deb.html>.\n\n- Display information about a package:\n\n`dpkg-deb --info {{path/to/file.deb}}`\n\n- Display the package's name and version on one line:\n\n`dpkg-deb --show {{path/to/file.deb}}`\n\n- List the package's contents:\n\n`dpkg-deb --contents {{path/to/file.deb}}`\n\n- Extract a package's contents into a directory:\n\n`dpkg-deb --extract {{path/to/file.deb}} {{path/to/directory}}`\n\n- Create a package from a specified directory:\n\n`dpkg-deb --build {{path/to/directory}}`\n
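To make the `--build` line concrete, here is a minimal, hypothetical package tree in the layout described above: the payload lives under the directory, and the metadata lives in a `DEBIAN/` subdirectory containing a control file. The package name `hello-local`, the script `hello.sh`, and the maintainer address are all made up for this sketch.

```sh
# Lay out the tree: payload under the directory, metadata in DEBIAN/.
mkdir -p hello-local/DEBIAN hello-local/usr/local/bin
install -m 0755 hello.sh hello-local/usr/local/bin/hello   # hello.sh is a placeholder script

cat > hello-local/DEBIAN/control <<'EOF'
Package: hello-local
Version: 1.0-1
Architecture: all
Maintainer: Example Admin <admin@example.org>
Description: Local demo package built with dpkg-deb
EOF

# Build the archive (written as hello-local.deb) and inspect the result.
dpkg-deb --build --root-owner-group hello-local
dpkg-deb --info hello-local.deb
```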
dpkg-query
dpkg-query(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dpkg-query(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMANDS | OPTIONS | EXIT STATUS | ENVIRONMENT | SECURITY | SEE ALSO | COLOPHON dpkg-query(1) dpkg suite dpkg-query(1) NAME top dpkg-query - a tool to query the dpkg database SYNOPSIS top dpkg-query [option...] command DESCRIPTION top dpkg-query is a tool to show information about packages listed in the dpkg database. COMMANDS top -l, --list [package-name-pattern...] List all known packages matching one or more patterns, regardless of their status, which includes any real or virtual package referenced in any dependency relationship field (such as Breaks, Enhances, etc.). If no package-name- pattern is given, list all packages in /usr/local/var/lib/dpkg/status, excluding the ones marked as not-installed (i.e. those which have been previously purged). Normal shell wildcard characters are allowed in package-name-pattern. Please note you will probably have to quote package-name-pattern to prevent the shell from performing filename expansion. For example this will list all package names starting with libc6: dpkg-query -l 'libc6*' The first three columns of the output show the desired action, the package status, and errors, in that order. Desired action: u = Unknown i = Install h = Hold r = Remove p = Purge Package status: n = Not-installed c = Config-files H = Half-installed U = Unpacked F = Half-configured W = Triggers-awaiting t = Triggers-pending i = Installed Error flags: <empty> = (none) R = Reinst-required An uppercase status or error letter indicates the package is likely to cause severe problems. Please refer to dpkg(1) for information about the above states and flags. The output format of this option is not configurable, but varies automatically to fit the terminal width. It is intended for human readers, and is not easily machine- readable. See -W (--show) and --showformat for a way to configure the output format. -W, --show [package-name-pattern...] Just like the --list option this will list all packages matching the given patterns. However the output can be customized using the --showformat option. The default output format gives one line per matching package, each line consisting of the package name and its installed version, separated by a tab. The package name will be architecture qualified for packages with a Multi-Arch field with the value same or with a foreign architecture, which is an architecture that is neither the native one nor all. -s, --status [package-name...] Report status of specified packages. This just displays the entry in the installed package status database. If no package-name is specified it will display all package entries in the status database (since dpkg 1.19.1). When multiple package-name entries are listed, the requested status entries are separated by an empty line, with the same order as specified on the argument list. -L, --listfiles package-name... List files installed to your system from package-name. When multiple package-names are listed, the requested lists of files are separated by an empty line, with the same order as specified on the argument list. 
Each file diversion is printed on its own line after its diverted file, prefixed with one of the following localized strings: locally diverted to: diverted-to package diverts others to: diverted-to diverted by pkg to: diverted-to Hint: When machine parsing the output, it is customary to set the locale to C.UTF-8 to get reproducible results. On some systems this might also require adapting the LANGUAGE environment variable appropriately if it is already set (see locale(7)). This command will not list extra files created by maintainer scripts, nor will it list alternatives. --control-list package-name List control files installed to your system from package-name (since dpkg 1.16.5). These can be used as input arguments to --control-show. --control-show package-name control-file Print the control-file installed to your system from package- name to the standard output (since dpkg 1.16.5). -c, --control-path package-name [control-file] List paths for control files installed to your system from package-name (since dpkg 1.15.4). If control-file is specified then only list the path for that control file if it is present. Warning: this command is deprecated as it gives direct access to the internal dpkg database, please switch to use --control-list and --control-show instead for all cases where those commands might give the same end result. Although, as long as there is still at least one case where this command is needed (i.e. when having to remove a damaging postrm maintainer script), and while there is no good solution for that, this command will not get removed. -S, --search filename-search-pattern... Search for packages that own files corresponding to the given patterns. Standard shell wildcard characters can be used in the pattern, where asterisk (*) and question mark (?) will match a slash, and backslash (\) will be used as an escape character. If the first character in the filename-search-pattern is none of *[?/ then it will be considered a substring match and will be implicitly surrounded by * (as in *filename-search- pattern*). If the subsequent string contains any of *[?\, then it will handled like a glob pattern, otherwise any trailing / or /. will be removed and a literal path lookup will be performed. This command will not list extra files created by maintainer scripts, nor will it list alternatives. The output format consists of one line per matching pattern, with a list of packages owning the pathname separated by a comma (U+002C ,) and a space (U+0020 ), followed by a colon (U+003A :) and a space, followed by the pathname. As in: pkgname1, pkgname2: pathname1 pkgname3: pathname2 File diversions are printed with the following localized strings: diversion by pkgname from: diverted-from diversion by pkgname to: diverted-to or for local diversions: local diversion from: diverted-from local diversion to: diverted-to Hint: When machine parsing the output, it is customary to set the locale to C.UTF-8 to get reproducible results. -p, --print-avail [package-name...] Display details about packages, as found in /usr/local/var/lib/dpkg/available. If no package-name is specified, it will display all package entries in the available database (since dpkg 1.19.1). When multiple package-name are listed, the requested available entries are separated by an empty line, with the same order as specified on the argument list. Users of APT-based frontends should use apt show package-name instead as the available file is only kept up-to-date when using dselect. -?, --help Show the usage message and exit. 
--version Show the version and exit. OPTIONS top --admindir=dir Change the location of the dpkg database. The default location is /usr/local/var/lib/dpkg. --root=directory Set the root directory to directory, which sets the administrative directory to directory/usr/local/var/lib/dpkg (since dpkg 1.21.0). --load-avail Also load the available file when using the --show and --list commands, which now default to only querying the status file (since dpkg 1.16.2). --no-pager Disables the use of any pager when showing information (since dpkg 1.19.2). -f, --showformat=format This option is used to specify the format of the output --show will produce (short option since dpkg 1.13.1). The format is a string that will be output for each package listed. In the format string, \ introduces escapes: \n newline \r carriage return \t tab \ before any other character suppresses any special meaning of the following character, which is useful for \ and $. Package information can be included by inserting variable references to package fields using the syntax ${field[;width]}. Fields are printed right-aligned unless the width is negative in which case left alignment will be used. The following fields are recognized but they are not necessarily available in the status file (only internal fields or fields stored in the binary package end up in it): Architecture Bugs Conffiles (internal) Config-Version (internal) Conflicts Breaks Depends Description Enhances Protected Essential Filename (internal, front-end related) Homepage Installed-Size MD5sum (internal, front-end related) MSDOS-Filename (internal, front-end related) Maintainer Origin Package Pre-Depends Priority Provides Recommends Replaces Revision (obsolete) Section Size (internal, front-end related) Source Status (internal) Suggests Tag (usually not in .deb but in repository Packages files) Triggers-Awaited (internal) Triggers-Pending (internal) Version The following are virtual fields, generated by dpkg-query from values from other fields (note that these do not use valid names for fields in control files): binary:Package It contains the binary package name with a possible architecture qualifier like libc6:amd64 (since dpkg 1.16.2). An architecture qualifier will be present to make the package name unambiguous, for packages with a Multi-Arch field with the value same or with a foreign architecture, which is an architecture that is neither the native one nor all. binary:Synopsis It contains the package short description (since dpkg 1.19.1). binary:Summary This is an alias for binary:Synopsis (since dpkg 1.16.2). db:Status-Abbrev It contains the abbreviated package status (as three characters), such as ii or iHR (since dpkg 1.16.2). See the --list command description for more details. db:Status-Want It contains the package wanted status, part of the Status field (since dpkg 1.17.11). db:Status-Status It contains the package status word, part of the Status field (since dpkg 1.17.11). db:Status-Eflag It contains the package status error flag, part of the Status field (since dpkg 1.17.11). db-fsys:Files It contains the list of the package filesystem entries separated by newlines (since dpkg 1.19.3). db-fsys:Last-Modified It contains the timestamp in seconds of the last time the package filesystem entries were modified (since dpkg 1.19.3). source:Package It contains the source package name for this binary package (since dpkg 1.16.2). 
source:Version It contains the source package version for this binary package (since dpkg 1.16.2) source:Upstream-Version It contains the source package upstream version for this binary package (since dpkg 1.18.16) The default format string is ${binary:Package}\t${Version}\n. Actually, all other fields found in the status file (i.e. user defined fields) can be requested, too. They will be printed as-is, though, no conversion nor error checking is done on them. To get the name of the dpkg maintainer and the installed version, you could run: dpkg-query -f='${binary:Package} ${Version}\t${Maintainer}\n' \ -W dpkg EXIT STATUS top 0 The requested query was successfully performed. 1 The requested query failed either fully or partially, due to no file or package being found (except for --control-path, --control-list and --control-show were such errors are fatal). 2 Fatal or unrecoverable error due to invalid command-line usage, or interactions with the system, such as accesses to the database, memory allocations, etc. ENVIRONMENT top External environment SHELL Sets the program to execute when spawning a command via a shell (since dpkg 1.19.2). PAGER DPKG_PAGER Sets the pager command to use (since dpkg 1.19.1), which will be executed with $SHELL -c. If SHELL is not set, sh will be used instead. The DPKG_PAGER overrides the PAGER environment variable (since dpkg 1.19.2). DPKG_ROOT If set and the --root option has not been specified, it will be used as the filesystem root directory (since dpkg 1.21.0). DPKG_ADMINDIR If set and the --admindir option has not been specified, it will be used as the dpkg data directory. DPKG_DEBUG Sets the debug mask (since dpkg 1.21.10) from an octal value. The currently accepted flags are described in the dpkg --debug option, but not all these flags might have an effect on this program. DPKG_COLORS Sets the color mode (since dpkg 1.18.5). The currently accepted values are: auto (default), always and never. Internal environment LESS Defined by dpkg-query to -FRSXMQ, if not already set, when spawning a pager (since dpkg 1.19.2). To change the default behavior, this variable can be preset to some other value including an empty string, or the PAGER or DPKG_PAGER variables can be set to disable specific options with -+, for example DPKG_PAGER="less -+F". SECURITY top Query operations should never require root, and delegating their execution to unprivileged users via some gain-root command can have security implications (such as privilege escalation), for example when a pager is automatically invoked by the tool. SEE ALSO top dpkg(1). COLOPHON top This page is part of the dpkg (Debian Package Manager) project. Information about the project can be found at https://wiki.debian.org/Teams/Dpkg/. If you have a bug report for this manual page, see http://bugs.debian.org/cgi-bin/pkgreport.cgi?src=dpkg. This page was obtained from the project's upstream Git repository git clone https://git.dpkg.org/git/dpkg/dpkg.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-18.) 
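A short sketch of the `--showformat` machinery described above, used here to rank installed packages by their Installed-Size field (recorded in KiB). The field names come from the list above; the awk filter on `db:Status-Status` keeps only fully installed packages, and the pipeline itself is just one possible way to slice the output.

```sh
# Largest installed packages, by Installed-Size (KiB), biggest last.
dpkg-query -W -f='${Installed-Size}\t${binary:Package}\t${db:Status-Status}\n' \
  | awk -F'\t' '$3 == "installed" {print $1 "\t" $2}' \
  | sort -n | tail -n 20
```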
# dpkg-query\n\n> Display information about installed packages.\n> More information: <https://manpages.debian.org/latest/dpkg/dpkg-query.1.html>.\n\n- List all installed packages:\n\n`dpkg-query --list`\n\n- List installed packages matching a pattern:\n\n`dpkg-query --list '{{libc6*}}'`\n\n- List all files installed by a package:\n\n`dpkg-query --listfiles {{libc6}}`\n\n- Show information about a package:\n\n`dpkg-query --status {{libc6}}`\n\n- Search for packages that own files matching a pattern:\n\n`dpkg-query --search {{/etc/ld.so.conf.d}}`\n
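For scripting, the `db:Status-Status` virtual field documented above gives a clean installed/not-installed test without parsing `--list` output (which is explicitly not meant for machines). A sketch; `nginx` is only an example package name.

```sh
pkg=nginx   # example package name
if [ "$(dpkg-query -W -f='${db:Status-Status}' "$pkg" 2>/dev/null)" = "installed" ]; then
    echo "$pkg is installed"
else
    echo "$pkg is not installed"
fi
```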
dracut
dracut(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dracut(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | USAGE | TROUBLESHOOTING | OPTIONS | ENVIRONMENT | FILES | AVAILABILITY | AUTHORS | SEE ALSO | COLOPHON DRACUT(8) dracut DRACUT(8) NAME top dracut - low-level tool for generating an initramfs/initrd image SYNOPSIS top dracut [OPTION...] [<image> [<kernel version>]] DESCRIPTION top Create an initramfs <image> for the kernel with the version <kernel version>. If <kernel version> is omitted, then the version of the actual running kernel is used. If <image> is omitted or empty, depending on bootloader specification, the default location can be /efi/<machine-id>/<kernel-version>/initrd, /boot/<machine-id>/<kernel-version>/initrd, /boot/efi/<machine-id>/<kernel-version>/initrd, /lib/modules/<kernel-version>/initrd or /boot/initramfs-<kernel-version>.img. dracut creates an initial image used by the kernel for preloading the block device modules (such as IDE, SCSI or RAID) which are needed to access the root filesystem, mounting the root filesystem and booting into the real system. At boot time, the kernel unpacks that archive into RAM disk, mounts and uses it as initial root file system. All finding of the root device happens in this early userspace. Initramfs images are also called "initrd". For a complete list of kernel command line options see dracut.cmdline(7). If you are dropped to an emergency shell, while booting your initramfs, the file /run/initramfs/rdsosreport.txt is created, which can be saved to a (to be mounted by hand) partition (usually /boot) or a USB stick. Additional debugging info can be produced by adding rd.debug to the kernel command line. /run/initramfs/rdsosreport.txt contains all logs and the output of some tools. It should be attached to any report about dracut problems. USAGE top To create a initramfs image, the most simple command is: # dracut This will generate a general purpose initramfs image, with all possible functionality resulting of the combination of the installed dracut modules and system tools. The image, depending on bootloader specification, can be /efi/<machine-id>/<kernel-version>/initrd, /boot/<machine-id>/<kernel-version>/initrd, /boot/efi/<machine-id>/<kernel-version>/initrd, /lib/modules/<kernel-version>/initrd or /boot/initramfs-<kernel-version>.img and contains the kernel modules of the currently active kernel with version <kernel-version>. If the initramfs image already exists, dracut will display an error message, and to overwrite the existing image, you have to use the --force option. # dracut --force If you want to specify another filename for the resulting image you would issue a command like: # dracut foobar.img To generate an image for a specific kernel version, the command would be: # dracut foobar.img 2.6.40-1.rc5.f20 A shortcut to generate the image at the default location for a specific kernel version is: # dracut --kver 2.6.40-1.rc5.f20 If you want to create lighter, smaller initramfs images, you may want to specify the --hostonly or -H option. Using this option, the resulting image will contain only those dracut modules, kernel modules and filesystems, which are needed to boot this specific machine. This has the drawback, that you cant put the disk on another controller or machine, and that you cant switch to another root filesystem, without recreating the initramfs image. The usage of the --hostonly option is only for experts and you will have to keep the broken pieces. 
At least keep a copy of a general purpose image (and corresponding kernel) as a fallback to rescue your system. Inspecting the Contents To see the contents of the image created by dracut, you can use the lsinitrd tool. # lsinitrd | less To display the contents of a file in the initramfs also use the lsinitrd tool: # lsinitrd -f /etc/ld.so.conf include ld.so.conf.d/*.conf Adding dracut Modules Some dracut modules are turned off by default and have to be activated manually. You can do this by adding the dracut modules to the configuration file /etc/dracut.conf or /etc/dracut.conf.d/myconf.conf. See dracut.conf(5). You can also add dracut modules on the command line by using the -a or --add option: # dracut --add module initramfs-module.img To see a list of available dracut modules, use the --list-modules option: # dracut --list-modules Omitting dracut Modules Sometimes you dont want a dracut module to be included for reasons of speed, size or functionality. To do this, either specify the omit_dracutmodules variable in the dracut.conf or /etc/dracut.conf.d/myconf.conf configuration file (see dracut.conf(5)), or use the -o or --omit option on the command line: # dracut -o "multipath lvm" no-multipath-lvm.img Adding Kernel Modules If you need a special kernel module in the initramfs, which is not automatically picked up by dracut, you have the use the --add-drivers option on the command line or the drivers variable in the /etc/dracut.conf or /etc/dracut.conf.d/myconf.conf configuration file (see dracut.conf(5)): # dracut --add-drivers mymod initramfs-with-mymod.img Boot parameters An initramfs generated without the "hostonly" mode, does not contain any system configuration files (except for some special exceptions), so the configuration has to be done on the kernel command line. With this flexibility, you can easily boot from a changed root partition, without the need to recompile the initramfs image. So, you could completely change your root partition (move it inside a md raid with encryption and LVM on top), as long as you specify the correct filesystem LABEL or UUID on the kernel command line for your root device, dracut will find it and boot from it. The kernel command line can also be provided by the dhcp server with the root-path option. See the section called Network Boot. For a full reference of all kernel command line parameters, see dracut.cmdline(7). To get a quick start for the suitable kernel command line on your system, use the --print-cmdline option: # dracut --print-cmdline root=UUID=8b8b6f91-95c7-4da2-831b-171e12179081 rootflags=rw,relatime,discard,data=ordered rootfstype=ext4 Specifying the root Device This is the only option dracut really needs to boot from your root partition. Because your root partition can live in various environments, there are a lot of formats for the root= option. The most basic one is root=<path to device node>: root=/dev/sda2 Because device node names can change, dependent on the drive ordering, you are encouraged to use the filesystem identifier (UUID) or filesystem label (LABEL) to specify your root partition: root=UUID=19e9dda3-5a38-484d-a9b0-fa6b067d0331 or root=LABEL=myrootpartitionlabel To see all UUIDs or LABELs on your system, do: # ls -l /dev/disk/by-uuid or # ls -l /dev/disk/by-label If your root partition is on the network see the section called Network Boot. Keyboard Settings If you have to input passwords for encrypted disk volumes, you might want to set the keyboard layout and specify a display font. 
A typical german kernel command line would contain: rd.vconsole.font=eurlatgr rd.vconsole.keymap=de-latin1-nodeadkeys rd.locale.LANG=de_DE.UTF-8 Setting these options can override the setting stored on your system, if you use a modern init system, like systemd. Blacklisting Kernel Modules Sometimes it is required to prevent the automatic kernel module loading of a specific kernel module. To do this, just add rd.driver.blacklist=<kernel module name>, with <kernel module name> not containing the .ko suffix, to the kernel command line. For example: rd.driver.blacklist=mptsas rd.driver.blacklist=nouveau The option can be specified multiple times on the kernel command line. Speeding up the Boot Process If you want to speed up the boot process, you can specify as much information for dracut on the kernel command as possible. For example, you can tell dracut, that you root partition is not on a LVM volume or not on a raid partition, or that it lives inside a specific crypto LUKS encrypted volume. By default, dracut searches everywhere. A typical dracut kernel command line for a plain primary or logical partition would contain: rd.luks=0 rd.lvm=0 rd.md=0 rd.dm=0 This turns off every automatic assembly of LVM, MD raids, DM raids and crypto LUKS. Of course, you could also omit the dracut modules in the initramfs creation process, but then you would lose the possibility to turn it on on demand. Injecting custom Files To add your own files to the initramfs image, you have several possibilities. The --include option let you specify a source path and a target path. For example # dracut --include cmdline-preset /etc/cmdline.d/mycmdline.conf initramfs-cmdline-pre.img will create an initramfs image, where the file cmdline-preset will be copied inside the initramfs to /etc/cmdline.d/mycmdline.conf. --include can only be specified once. # mkdir -p rd.live.overlay/etc/cmdline.d # mkdir -p rd.live.overlay/etc/conf.d # echo "ip=dhcp" >> rd.live.overlay/etc/cmdline.d/mycmdline.conf # echo export FOO=testtest >> rd.live.overlay/etc/conf.d/testvar.conf # echo export BAR=testtest >> rd.live.overlay/etc/conf.d/testvar.conf # tree rd.live.overlay/ rd.live.overlay/ `-- etc |-- cmdline.d | `-- mycmdline.conf `-- conf.d `-- testvar.conf # dracut --include rd.live.overlay / initramfs-rd.live.overlay.img This will put the contents of the rd.live.overlay directory into the root of the initramfs image. The --install option let you specify several files, which will get installed in the initramfs image at the same location, as they are present on initramfs creation time. # dracut --install 'strace fsck.ext4 ssh' initramfs-dbg.img This will create an initramfs with the strace, fsck.ext4 and ssh executables, together with the libraries needed to start those. The --install option can be specified multiple times. Network Boot If your root partition is on a network drive, you have to have the network dracut modules installed to create a network aware initramfs image. If you specify ip=dhcp on the kernel command line, then dracut asks a dhcp server about the ip address for the machine. The dhcp server can also serve an additional root-path, which will set the root device for dracut. With this mechanism, you have static configuration on your client machine and a centralized boot configuration on your TFTP/DHCP server. If you cant pass a kernel command line, then you can inject /etc/cmdline.d/mycmdline.conf, with a method described in the section called Injecting custom Files. 
Reducing the Image Size To reduce the size of the initramfs, you should create it with by omitting all dracut modules, which you know, you dont need to boot the machine. You can also specify the exact dracut and kernel modules to produce a very tiny initramfs image. For example for a NFS image, you would do: # dracut -m "nfs network base" initramfs-nfs-only.img Then you would boot from this image with your target machine and reduce the size once more by creating it on the target machine with the --host-only option: # dracut -m "nfs network base" --host-only initramfs-nfs-host-only.img This will reduce the size of the initramfs image significantly. TROUBLESHOOTING top If the boot process does not succeed, you have several options to debug the situation. Identifying your problem area 1. Remove 'rhgb' and 'quiet' from the kernel command line 2. Add 'rd.shell' to the kernel command line. This will present a shell should dracut be unable to locate your root device 3. Add 'rd.shell rd.debug log_buf_len=1M' to the kernel command line so that dracut shell commands are printed as they are executed 4. The file /run/initramfs/rdsosreport.txt is generated, which contains all the logs and the output of all significant tools, which are mentioned later. If you want to save that output, simply mount /boot by hand or insert an USB stick and mount that. Then you can store the output for later inspection. Information to include in your report All bug reports In all cases, the following should be mentioned and attached to your bug report: The exact kernel command-line used. Typically from the bootloader configuration file (e.g. /boot/grub2/grub.cfg) or from /proc/cmdline. A copy of your disk partition information from /etc/fstab, which might be obtained booting an old working initramfs or a rescue medium. Turn on dracut debugging (see the debugging dracut section), and attach the file /run/initramfs/rdsosreport.txt. If you use a dracut configuration file, please include /etc/dracut.conf and all files in /etc/dracut.conf.d/*.conf Network root device related problems This section details information to include when experiencing problems on a system whose root device is located on a network attached volume (e.g. iSCSI, NFS or NBD). As well as the information from the section called All bug reports, include the following information: Please include the output of # /sbin/ifup <interfacename> # ip addr show Debugging dracut Configure a serial console Successfully debugging dracut will require some form of console logging during the system boot. This section documents configuring a serial console connection to record boot messages. 1. First, enable serial console output for both the kernel and the bootloader. 2. Open the file /boot/grub2/grub.cfg for editing. Below the line 'timeout=5', add the following: serial --unit=0 --speed=9600 terminal --timeout=5 serial console 3. Also in /boot/grub2/grub.cfg, add the following boot arguments to the 'kernel' line: console=tty0 console=ttyS0,9600 4. When finished, the /boot/grub2/grub.cfg file should look similar to the example below. default=0 timeout=5 serial --unit=0 --speed=9600 terminal --timeout=5 serial console title Fedora (2.6.29.5-191.fc11.x86_64) root (hd0,0) kernel /vmlinuz-2.6.29.5-191.fc11.x86_64 ro root=/dev/mapper/vg_uc1-lv_root console=tty0 console=ttyS0,9600 initrd /dracut-2.6.29.5-191.fc11.x86_64.img 5. 
More detailed information on how to configure the kernel for console output can be found at http://www.faqs.org/docs/Linux-HOWTO/Remote-Serial-Console-HOWTO.html#CONFIGURE-KERNEL . 6. Redirecting non-interactive output Note You can redirect all non-interactive output to /dev/kmsg and the kernel will put it out on the console when it reaches the kernel buffer by doing # exec >/dev/kmsg 2>&1 </dev/console Using the dracut shell dracut offers a shell for interactive debugging in the event dracut fails to locate your root filesystem. To enable the shell: 1. Add the boot parameter 'rd.shell' to your bootloader configuration file (e.g. /boot/grub2/grub.cfg) 2. Remove the boot arguments 'rhgb' and 'quiet' A sample /boot/grub2/grub.cfg bootloader configuration file is listed below. default=0 timeout=5 serial --unit=0 --speed=9600 terminal --timeout=5 serial console title Fedora (2.6.29.5-191.fc11.x86_64) root (hd0,0) kernel /vmlinuz-2.6.29.5-191.fc11.x86_64 ro root=/dev/mapper/vg_uc1-lv_root console=tty0 rd.shell initrd /dracut-2.6.29.5-191.fc11.x86_64.img 3. If system boot fails, you will be dropped into a shell as seen in the example below. No root device found Dropping to debug shell. # 4. Use this shell prompt to gather the information requested above (see the section called All bug reports). Accessing the root volume from the dracut shell From the dracut debug shell, you can manually perform the task of locating and preparing your root volume for boot. The required steps will depend on how your root volume is configured. Common scenarios include: A block device (e.g. /dev/sda7) A LVM logical volume (e.g. /dev/VolGroup00/LogVol00) An encrypted device (e.g. /dev/mapper/luks-4d5972ea-901c-4584-bd75-1da802417d83) A network attached device (e.g. netroot=iscsi:@192.168.0.4::3260::iqn.2009-02.org.example:for.all) The exact method for locating and preparing will vary. However, to continue with a successful boot, the objective is to locate your root volume and create a symlink /dev/root which points to the file system. For example, the following example demonstrates accessing and booting a root volume that is an encrypted LVM Logical volume. 1. Inspect your partitions using parted # parted /dev/sda -s p Model: ATA HTS541060G9AT00 (scsi) Disk /dev/sda: 60.0GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 10.8GB 107MB primary ext4 boot 2 10.8GB 55.6GB 44.7GB logical lvm 2. You recall that your root volume was a LVM logical volume. Scan and activate any logical volumes. # lvm vgscan # lvm vgchange -ay 3. You should see any logical volumes now using the command blkid: # blkid /dev/sda1: UUID="3de247f3-5de4-4a44-afc5-1fe179750cf7" TYPE="ext4" /dev/sda2: UUID="Ek4dQw-cOtq-5MJu-OGRF-xz5k-O2l8-wdDj0I" TYPE="LVM2_member" /dev/mapper/linux-root: UUID="def0269e-424b-4752-acf3-1077bf96ad2c" TYPE="crypto_LUKS" /dev/mapper/linux-home: UUID="c69127c1-f153-4ea2-b58e-4cbfa9257c5e" TYPE="ext4" /dev/mapper/linux-swap: UUID="47b4d329-975c-4c08-b218-f9c9bf3635f1" TYPE="swap" 4. From the output above, you recall that your root volume exists on an encrypted block device. Following the guidance disk encryption guidance from the Installation Guide, you unlock your encrypted root volume. # UUID=$(cryptsetup luksUUID /dev/mapper/linux-root) # cryptsetup luksOpen /dev/mapper/linux-root luks-$UUID Enter passphrase for /dev/mapper/linux-root: Key slot 0 unlocked. 5. 
Next, make a symbolic link to the unlocked root volume # ln -s /dev/mapper/luks-$UUID /dev/root 6. With the root volume available, you may continue booting the system by exiting the dracut shell # exit Additional dracut boot parameters For more debugging options, see dracut.cmdline(7). Debugging dracut on shutdown To debug the shutdown sequence on systemd systems, you can rd.break on pre-shutdown or shutdown. To do this from an already booted system: # mkdir -p /run/initramfs/etc/cmdline.d # echo "rd.debug rd.break=pre-shutdown rd.break=shutdown" > /run/initramfs/etc/cmdline.d/debug.conf # touch /run/initramfs/.need_shutdown This will give you a dracut shell after the system pivoted back in the initramfs. OPTIONS top --kver <kernel version> Set the kernel version. This enables to specify the kernel version, without specifying the location of the initramfs image. For example: # dracut --kver 3.5.0-0.rc7.git1.2.fc18.x86_64 -f, --force Overwrite existing initramfs file. <output file> --rebuild Append the current arguments to those with which the input initramfs image was built. This option helps in incrementally building the initramfs for testing. If optional <output file> is not provided, the input initramfs provided to rebuild will be used as output file. -a, --add <list of dracut modules> Add a space-separated list of dracut modules to the default set of modules. This parameter can be specified multiple times. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --add "module1 module2" ... --force-add <list of dracut modules> Force to add a space-separated list of dracut modules to the default set of modules, when -H is specified. This parameter can be specified multiple times. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --force-add "module1 module2" ... -o, --omit <list of dracut modules> Omit a space-separated list of dracut modules. This parameter can be specified multiple times. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --omit "module1 module2" ... -m, --modules <list of dracut modules> Specify a space-separated list of dracut modules to call when building the initramfs. Modules are located in /usr/lib/dracut/modules.d. This parameter can be specified multiple times. This option forces dracut to only include the specified dracut modules. In most cases the "--add" option is what you want to use. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --modules "module1 module2" ... -d, --drivers <list of kernel modules> Specify a space-separated list of kernel modules to exclusively include in the initramfs. The kernel modules have to be specified without the ".ko" suffix. This parameter can be specified multiple times. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --drivers "kmodule1 kmodule2" ... --add-drivers <list of kernel modules> Specify a space-separated list of kernel modules to add to the initramfs. The kernel modules have to be specified without the ".ko" suffix. This parameter can be specified multiple times. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --add-drivers "kmodule1 kmodule2" ... --force-drivers <list of kernel modules> See add-drivers above. But in this case it is ensured that the drivers are tried to be loaded early via modprobe. 
Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --force-drivers "kmodule1 kmodule2" ... --omit-drivers <list of kernel modules> Specify a space-separated list of kernel modules not to add to the initramfs. The kernel modules have to be specified without the ".ko" suffix. This parameter can be specified multiple times. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --omit-drivers "kmodule1 kmodule2" ... --filesystems <list of filesystems> Specify a space-separated list of kernel filesystem modules to exclusively include in the generic initramfs. This parameter can be specified multiple times. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --filesystems "filesystem1 filesystem2" ... -k, --kmoddir <kernel directory> Specify the directory, where to look for kernel modules. --fwdir <dir>[:<dir>...]++ Specify additional directories, where to look for firmwares. This parameter can be specified multiple times. --libdirs <list of directories> Specify a space-separated list of directories to look for libraries to include in the generic initramfs. This parameter can be specified multiple times. Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --libdirs "dir1 dir2" ... --kernel-cmdline <parameters> Specify default kernel command line parameters. --kernel-only Only install kernel drivers and firmware files. --no-kernel Do not install kernel drivers and firmware files. --early-microcode Combine early microcode with ramdisk. --no-early-microcode Do not combine early microcode with ramdisk. --print-cmdline Print the kernel command line for the current disk layout. --mdadmconf Include local /etc/mdadm.conf file. --nomdadmconf Do not include local /etc/mdadm.conf file. --lvmconf Include local /etc/lvm/lvm.conf file. --nolvmconf Do not include local /etc/lvm/lvm.conf file. --fscks <list of fsck tools> Add a space-separated list of fsck tools, in addition to dracut.conf's specification; the installation is opportunistic (non-existing tools are ignored). Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --fscks "fsck.foo barfsck" ... --nofscks Inhibit installation of any fsck tools. --strip Strip binaries in the initramfs (default). --aggressive-strip Strip more than just debug symbol and sections, for a smaller initramfs build. The --strip option must also be specified. --nostrip Do not strip binaries in the initramfs. --hardlink Hardlink files in the initramfs (default). --nohardlink Do not hardlink files in the initramfs. --prefix <dir> Prefix initramfs files with the specified directory. --noprefix Do not prefix initramfs files (default). -h, --help Display help text and exit. --debug Output debug information of the build process. -v, --verbose Increase verbosity level (default is info(4)). --version Display version and exit. -q, --quiet Decrease verbosity level (default is info(4)). -c, --conf <dracut configuration file> Specify configuration file to use. Default: /etc/dracut.conf --confdir <configuration directory> Specify configuration directory to use. Default: /etc/dracut.conf.d --tmpdir <temporary directory> Specify temporary directory to use. Default: /var/tmp -r, --sysroot <sysroot directory> Specify the sysroot directory to collect files from. This is useful to create the initramfs image from a cross-compiled sysroot directory. 
For the extra helper variables, see ENVIRONMENT below. Default: empty --sshkey <sshkey file> SSH key file used with ssh-client module. --logfile <logfile> Logfile to use; overrides any setting from the configuration files. Default: /var/log/dracut.log -l, --local Activates the local mode. dracut will use modules from the current working directory instead of the system-wide installed modules in /usr/lib/dracut/modules.d. This is useful when running dracut from a git checkout. -H, --hostonly Host-only mode: Install only what is needed for booting the local host instead of a generic host and generate host-specific configuration. Warning If chrooted to another root other than the real root device, use "--fstab" and provide a valid /etc/fstab. -N, --no-hostonly Disable host-only mode. --hostonly-mode <mode> Specify the host-only mode to use. <mode> could be one of "sloppy" or "strict". In "sloppy" host-only mode, extra drivers and modules will be installed, so minor hardware change wont make the image unbootable (e.g. changed keyboard), and the image is still portable among similar hosts. With "strict" mode enabled, anything not necessary for booting the local host in its current state will not be included, and modules may do some extra job to save more space. Minor change of hardware or environment could make the image unbootable. Default: sloppy --hostonly-cmdline Store kernel command line arguments needed in the initramfs. --no-hostonly-cmdline Do not store kernel command line arguments needed in the initramfs. --no-hostonly-default-device Do not generate implicit host devices like root, swap, fstab, etc. Use "--mount" or "--add-device" to explicitly add devices as needed. --hostonly-i18n Install only needed keyboard and font files according to the host configuration (default). --no-hostonly-i18n Install all keyboard and font files available. --hostonly-nics <list of nics> Only enable listed NICs in the initramfs. The list can be empty, so other modules can install only the necessary network drivers. --persistent-policy <policy> Use <policy> to address disks and partitions. <policy> can be any directory name found in /dev/disk (e.g. "by-uuid", "by-label"), or "mapper" to use /dev/mapper device names (default). --fstab Use /etc/fstab instead of /proc/self/mountinfo. --add-fstab <filename> Add entries of <filename> to the initramfs /etc/fstab. --mount "<device> <mountpoint> <filesystem type> [<filesystem options> [<dump frequency> [<fsck order>]]]" Mount <device> on <mountpoint> with <filesystem type> in the initramfs. <filesystem options>, <dump options> and <fsck order> can be specified, see fstab manpage for the details. The default <filesystem options> is "defaults". The default <dump frequency> is "0". The default <fsck order> is "2". --mount "<mountpoint>" Like above, but <device>, <filesystem type> and <filesystem options> are determined by looking at the current mounts. --add-device <device> Bring up <device> in initramfs, <device> should be the device name. This can be useful in host-only mode for resume support when your swap is on LVM or an encrypted partition. [NB --device can be used for compatibility with earlier releases] -i, --include <SOURCE> <TARGET> Include the files in the SOURCE directory into the TARGET directory in the final initramfs. If SOURCE is a file, it will be installed to TARGET in the final initramfs. This parameter can be specified multiple times. -I, --install <file list> Install the space separated list of files into the initramfs. 
Note If the list has multiple arguments, then you have to put these in quotes. For example: # dracut --install "/bin/foo /sbin/bar" ... --install-optional <file list> Install the space separated list of files into the initramfs, if they exist. --gzip Compress the generated initramfs using gzip. This will be done by default, unless another compression option or --no-compress is passed. Equivalent to "--compress=gzip -9". --bzip2 Compress the generated initramfs using bzip2. Warning Make sure your kernel has bzip2 decompression support compiled in, otherwise you will not be able to boot. Equivalent to "--compress=bzip2 -9". --lzma Compress the generated initramfs using lzma. Warning Make sure your kernel has lzma decompression support compiled in, otherwise you will not be able to boot. Equivalent to "--compress=lzma -9 -T0". --xz Compress the generated initramfs using xz. Warning Make sure your kernel has xz decompression support compiled in, otherwise you will not be able to boot. Equivalent to "--compress=xz --check=crc32 --lzma2=dict=1MiB -T0". --lzo Compress the generated initramfs using lzop. Warning Make sure your kernel has lzo decompression support compiled in, otherwise you will not be able to boot. Equivalent to "--compress=lzop -9". --lz4 Compress the generated initramfs using lz4. Warning Make sure your kernel has lz4 decompression support compiled in, otherwise you will not be able to boot. Equivalent to "--compress=lz4 -l -9". --zstd Compress the generated initramfs using Zstandard. Warning Make sure your kernel has zstd decompression support compiled in, otherwise you will not be able to boot. Equivalent to "--compress=zstd -15 -q -T0". --compress <compressor> Compress the generated initramfs using the passed compression program. If you pass it just the name of a compression program, it will call that program with known-working arguments. If you pass a quoted string with arguments, it will be called with exactly those arguments. Depending on what you pass, this may result in an initramfs that the kernel cannot decompress. The default value can also be set via the INITRD_COMPRESS environment variable. --squash-compressor <compressor> Compress the squashfs image using the passed compressor and compressor specific options for mksquashfs. You can refer to mksquashfs manual for supported compressors and compressor specific options. If squash module is not called when building the initramfs, this option will not take effect. --no-compress Do not compress the generated initramfs. This will override any other compression options. --reproducible Create reproducible images. --no-reproducible Do not create reproducible images. --list-modules List all available dracut modules. -M, --show-modules Print included modules name to standard output during build. --keep Keep the initramfs temporary directory for debugging purposes. --printsize Print out the module install size. --profile Output profile information of the build process. --ro-mnt Mount / and /usr read-only by default. -L, --stdlog <level> [0-6] Specify logging level (to standard error). 0 - suppress any messages 1 - only fatal errors 2 - all errors 3 - warnings 4 - info 5 - debug info (here starts lots of output) 6 - trace info (and even more) --regenerate-all Regenerate all initramfs images at the default location with the kernel versions found on the system. Additional parameters are passed through. -p, --parallel Try to execute tasks in parallel. 
Currently only supported with --regenerate-all (build initramfs images for all kernel versions simultaneously). --noimageifnotneeded Do not create an image in host-only mode, if no kernel driver is needed and no /etc/cmdline/*.conf will be generated into the initramfs. --loginstall <directory> Log all files installed from the host to <directory>. --uefi Instead of creating an initramfs image, dracut will create an UEFI executable, which can be executed by an UEFI BIOS. The default output filename is <EFI>/EFI/Linux/linux-$kernel$-<MACHINE_ID>-<BUILD_ID>.efi. <EFI> might be /efi, /boot or /boot/efi depending on where the ESP partition is mounted. The <BUILD_ID> is taken from BUILD_ID in /usr/lib/os-release or if it exists /etc/os-release and is left out, if BUILD_ID is non-existent or empty. --no-uefi Disables UEFI mode. --no-machineid Affects the default output filename of --uefi and will discard the <MACHINE_ID> part. --uefi-stub <file> Specifies the UEFI stub loader, which will load the attached kernel, initramfs and kernel command line and boots the kernel. The default is $prefix/lib/systemd/boot/efi/linux<EFI-MACHINE-TYPE-NAME>.efi.stub. --uefi-splash-image <file> Specifies the UEFI stub loaders splash image. Requires bitmap (.bmp) image format. --kernel-image <file> Specifies the kernel image, which to include in the UEFI executable. The default is /lib/modules/<KERNEL-VERSION>/vmlinuz or /boot/vmlinuz-<KERNEL-VERSION>. --sbat <parameters> Specifies the SBAT parameters, which to include in the UEFI executable. By default the default SBAT string added is "sbat,1,SBAT Version,sbat,1, https://github.com/rhboot/shim/blob/main/SBAT.md ". --enhanced-cpio Attempt to use the dracut-cpio binary, which optimizes archive creation for copy-on-write filesystems by using the copy_file_range(2) syscall via Rusts io::copy(). When specified, initramfs archives are also padded to ensure optimal data alignment for extent sharing. To retain reflink data deduplication benefits, this should be used alongside the --no-compress and --nostrip parameters, with initramfs source files, --tmpdir staging area and destination all on the same copy-on-write capable filesystem. ENVIRONMENT top INITRD_COMPRESS sets the default compression program. See --compress. DRACUT_LDCONFIG sets the ldconfig program path and options. Optional. Used for --sysroot. Default: ldconfig DRACUT_LDD sets the ldd program path and options. Optional. Used for --sysroot. Default: ldd DRACUT_TESTBIN sets the initially tested binary for detecting library paths. Optional. Used for --sysroot. In the cross-compiled sysroot, the default value (/bin/sh) is unusable, as it is an absolute symlink and points outside the sysroot directory. Default: /bin/sh DRACUT_INSTALL overrides path and options for executing dracut-install internally. Optional. Can be used to debug dracut-install while running the main dracut script. Default: dracut-install Example: DRACUT_INSTALL="valgrind dracut-install" DRACUT_COMPRESS_BZIP2, DRACUT_COMPRESS_LBZIP2, DRACUT_COMPRESS_LZMA, DRACUT_COMPRESS_XZ, DRACUT_COMPRESS_GZIP, DRACUT_COMPRESS_PIGZ, DRACUT_COMPRESS_LZOP, DRACUT_COMPRESS_ZSTD, DRACUT_COMPRESS_LZ4, DRACUT_COMPRESS_CAT overrides for compression utilities to support using them from non-standard paths. Default values are the default compression utility names to be found in PATH. DRACUT_ARCH overrides the value of uname -m. Used for --sysroot. Default: empty (the value of uname -m on the host system) SYSTEMD_VERSION overrides systemd version. Used for --sysroot. 
SYSTEMCTL overrides the systemctl binary. Used for --sysroot. NM_VERSION overrides the NetworkManager version. Used for --sysroot. DRACUT_INSTALL_PATH overrides PATH environment for dracut-install to look for binaries relative to --sysroot. In a cross-compiled environment (e.g. Yocto), PATH points to natively built binaries that are not in the hosts /bin, /usr/bin, etc. dracut-install still needs plain /bin and /usr/bin that are relative to the cross-compiled sysroot. Default: PATH DRACUT_INSTALL_LOG_TARGET overrides DRACUT_LOG_TARGET for dracut-install. It allows running dracut-install* to run with different log target that dracut** runs with. Default: DRACUT_LOG_TARGET DRACUT_INSTALL_LOG_LEVEL overrides DRACUT_LOG_LEVEL for dracut-install. It allows running dracut-install* to run with different log level that dracut** runs with. Default: DRACUT_LOG_LEVEL FILES top /var/log/dracut.log logfile of initramfs image creation /tmp/dracut.log logfile of initramfs image creation, if /var/log/dracut.log is not writable /etc/dracut.conf see dracut.conf5 /etc/dracut.conf.d/*.conf see dracut.conf5 /usr/lib/dracut/dracut.conf.d/*.conf see dracut.conf5 Configuration in the initramfs /etc/conf.d/ Any files found in /etc/conf.d/ will be sourced in the initramfs to set initial values. Command line options will override these values set in the configuration files. /etc/cmdline Can contain additional command line options. Deprecated, better use /etc/cmdline.d/*.conf. /etc/cmdline.d/*.conf Can contain additional command line options. AVAILABILITY top The dracut command is part of the dracut package and is available from https://github.com/dracutdevs/dracut AUTHORS top Harald Hoyer Victor Lowther Amadeusz onowski Hannes Reinecke Daniel Molkentin Will Woods Philippe Seewer Warren Togami SEE ALSO top dracut.cmdline(7) dracut.conf(5) lsinitrd(1) COLOPHON top This page is part of the dracut (event driven initramfs infrastructure) project. Information about the project can be found at https://dracut.wiki.kernel.org/index.php/Main_Page. If you have a bug report for this manual page, send it to initramfs@vger.kernel.org. This page was obtained from the project's upstream Git repository https://github.com/dracutdevs/dracut.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-11-18.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org dracut 059-204-g6acfecae 09/13/2023 DRACUT(8) Pages that refer to this page: localectl(1), lsinitrd(1), dracut.conf(5), bootup(7), dracut.bootup(7), dracut.cmdline(7), dracut.modules(7), dracut-catimages(8), systemd-network-generator.service(8), systemtap-service(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
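As a rough illustration of the compression flags and environment overrides described above, a sketch of three equivalent ways to control the compressor; the xz path is a made-up example of a non-standard location, not a recommendation:

# Rebuild the current kernel's initramfs, compressing it with zstd
dracut --force --zstd

# Select the compressor through the environment instead of a command line flag
INITRD_COMPRESS=xz dracut --force

# Point dracut at an xz binary installed outside the normal PATH (placeholder path)
DRACUT_COMPRESS_XZ=/opt/xz/bin/xz dracut --force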
# dracut\n\n> Generate initramfs images to boot the Linux kernel.\n> Dracut uses options from configuration files in `/etc/dracut.conf`, `/etc/dracut.conf.d/*.conf` and `/usr/lib/dracut/dracut.conf.d/*.conf` by default.\n> More information: <https://github.com/dracutdevs/dracut/wiki>.\n\n- Generate an initramfs image for the current kernel without overriding any options:\n\n`dracut`\n\n- Generate an initramfs image for the current kernel and overwrite the existing one:\n\n`dracut --force`\n\n- Generate an initramfs image for a specific kernel:\n\n`dracut --kver {{kernel_version}}`\n\n- List available modules:\n\n`dracut --list-modules`\n
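Building on the summary above, a sketch of two common maintenance invocations; the kernel version is a placeholder, and --hostonly and --force are standard dracut flags documented in the full option list:

# Regenerate initramfs images for every installed kernel, building them in parallel
dracut --regenerate-all --force --parallel

# Build a smaller host-only image for one specific kernel
dracut --force --hostonly --kver {{kernel_version}}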
du
du(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training du(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | PATTERNS | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON DU(1) User Commands DU(1) NAME top du - estimate file space usage SYNOPSIS top du [OPTION]... [FILE]... du [OPTION]... --files0-from=F DESCRIPTION top Summarize device usage of the set of FILEs, recursively for directories. Mandatory arguments to long options are mandatory for short options too. -0, --null end each output line with NUL, not newline -a, --all write counts for all files, not just directories --apparent-size print apparent sizes rather than device usage; although the apparent size is usually smaller, it may be larger due to holes in ('sparse') files, internal fragmentation, indirect blocks, and the like -B, --block-size=SIZE scale sizes by SIZE before printing them; e.g., '-BM' prints sizes in units of 1,048,576 bytes; see SIZE format below -b, --bytes equivalent to '--apparent-size --block-size=1' -c, --total produce a grand total -D, --dereference-args dereference only symlinks that are listed on the command line -d, --max-depth=N print the total for a directory (or file, with --all) only if it is N or fewer levels below the command line argument; --max-depth=0 is the same as --summarize --files0-from=F summarize device usage of the NUL-terminated file names specified in file F; if F is -, then read names from standard input -H equivalent to --dereference-args (-D) -h, --human-readable print sizes in human readable format (e.g., 1K 234M 2G) --inodes list inode usage information instead of block usage -k like --block-size=1K -L, --dereference dereference all symbolic links -l, --count-links count sizes many times if hard linked -m like --block-size=1M -P, --no-dereference don't follow any symbolic links (this is the default) -S, --separate-dirs for directories do not include size of subdirectories --si like -h, but use powers of 1000 not 1024 -s, --summarize display only a total for each argument -t, --threshold=SIZE exclude entries smaller than SIZE if positive, or entries greater than SIZE if negative --time show time of the last modification of any file in the directory, or any of its subdirectories --time=WORD show time as WORD instead of modification time: atime, access, use, ctime or status --time-style=STYLE show times using STYLE, which can be: full-iso, long-iso, iso, or +FORMAT; FORMAT is interpreted like in 'date' -X, --exclude-from=FILE exclude files that match any pattern in FILE --exclude=PATTERN exclude files that match PATTERN -x, --one-file-system skip directories on different file systems --help display this help and exit --version output version information and exit Display values are in units of the first available SIZE from --block-size, and the DU_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables. Otherwise, units default to 1024 bytes (or 512 if POSIXLY_CORRECT is set). The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. PATTERNS top PATTERN is a shell pattern (not a regular expression). The pattern ? matches any one character, whereas * matches any string (composed of zero, one or multiple characters). For example, *.o will match any files whose names end in .o. 
Therefore, the command du --exclude='*.o' will skip all files and subdirectories ending in .o (including the file .o itself). AUTHOR top Written by Torbjorn Granlund, David MacKenzie, Paul Eggert, and Jim Meyering. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/du> or available locally via: info '(coreutils) du invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 DU(1) Pages that refer to this page: tmpfs(5), symlink(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
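To connect the PATTERNS section with the size options above, a brief sketch; the project path is a placeholder:

# Total size of a tree, skipping object files and the .git directory
du -sh --exclude='*.o' --exclude='.git' {{path/to/project}}

# Report only entries of at least 100 MiB, without crossing filesystem boundaries
du -h --threshold=100M -x {{path/to/project}}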
# du\n\n> Disk usage: estimate and summarize file and directory space usage.\n> More information: <https://www.gnu.org/software/coreutils/du>.\n\n- List the sizes of a directory and any subdirectories, in the given unit (B/KiB/MiB):\n\n`du -{{b|k|m}} {{path/to/directory}}`\n\n- List the sizes of a directory and any subdirectories, in human-readable form (i.e. auto-selecting the appropriate unit for each size):\n\n`du -h {{path/to/directory}}`\n\n- Show the size of a single directory, in human-readable units:\n\n`du -sh {{path/to/directory}}`\n\n- List the human-readable sizes of a directory and of all the files and directories within it:\n\n`du -ah {{path/to/directory}}`\n\n- List the human-readable sizes of a directory and any subdirectories, up to N levels deep:\n\n`du -h --max-depth={{N}} {{path/to/directory}}`\n\n- List the human-readable size of all `.jpg` files in subdirectories of the current directory, and show a cumulative total at the end:\n\n`du -ch {{*/*.jpg}}`\n
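A common follow-up to the commands above is ranking the output; this sketch (directory path is a placeholder) prints the ten largest first-level subdirectories, relying on GNU sort's -h flag to compare human-readable sizes:

# Summarize each immediate subdirectory, sort by size descending, keep the top ten
du -h --max-depth=1 {{path/to/directory}} | sort -hr | head -n 10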
dumpe2fs
dumpe2fs(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dumpe2fs(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT CODE | BUGS | AUTHOR | AVAILABILITY | SEE ALSO | COLOPHON DUMPE2FS(8) System Manager's Manual DUMPE2FS(8) NAME top dumpe2fs - dump ext2/ext3/ext4 file system information SYNOPSIS top dumpe2fs [ -bfghixV ] [ -o superblock=superblock ] [ -o blocksize=blocksize ] device DESCRIPTION top dumpe2fs prints the super block and blocks group information for the file system present on device. Note: When used with a mounted file system, the printed information may be old or inconsistent. OPTIONS top -b print the blocks which are reserved as bad in the file system. -o superblock=superblock use the block superblock when examining the file system. This option is not usually needed except by a file system wizard who is examining the remains of a very badly corrupted file system. -o blocksize=blocksize use blocks of blocksize bytes when examining the file system. This option is not usually needed except by a file system wizard who is examining the remains of a very badly corrupted file system. -f force dumpe2fs to display a file system even though it may have some file system feature flags which dumpe2fs may not understand (and which can cause some of dumpe2fs's display to be suspect). -g display the group descriptor information in a machine readable colon-separated value format. The fields displayed are the group number; the number of the first block in the group; the superblock location (or -1 if not present); the range of blocks used by the group descriptors (or -1 if not present); the block bitmap location; the inode bitmap location; and the range of blocks used by the inode table. -h only display the superblock information and not any of the block group descriptor detail information. -i display the file system data from an image file created by e2image, using device as the pathname to the image file. -m If the mmp feature is enabled on the file system, check if device is in use by another node, see e2mmpstatus(8) for full details. If used together with the -i option, only the MMP block information is printed. -x print the detailed group information block numbers in hexadecimal format -V print the version number of dumpe2fs and exit. EXIT CODE top dumpe2fs exits with a return code of 0 if the operation completed without errors. It will exit with a non-zero return code if there are any errors, such as problems reading a valid superblock, bad checksums, or if the device is in use by another node and -m is specified. BUGS top You may need to know the physical file system structure to understand the output. AUTHOR top dumpe2fs was written by Remy Card <Remy.Card@linux.org>. It is currently being maintained by Theodore Ts'o <tytso@alum.mit.edu>. AVAILABILITY top dumpe2fs is part of the e2fsprogs package and is available from http://e2fsprogs.sourceforge.net. SEE ALSO top e2fsck(8), e2mmpstatus(8), mke2fs(8), tune2fs(8). ext4(5) COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 DUMPE2FS(8) Pages that refer to this page: ext4(5), badblocks(8), debugfs(8), e2freefrag(8), e2fsck(8), e2image(8), e2mmpstatus(8), mke2fs(8), tune2fs(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# dumpe2fs\n\n> Print the superblock and block group information for ext2/ext3/ext4 filesystems.\n> For accurate results, unmount the partition first using `umount {{device}}`.\n> More information: <https://manned.org/dumpe2fs>.\n\n- Display ext2, ext3 and ext4 filesystem information:\n\n`dumpe2fs {{/dev/sdXN}}`\n\n- Display the blocks which are reserved as bad in the filesystem:\n\n`dumpe2fs -b {{/dev/sdXN}}`\n\n- Force the display of filesystem information even if some feature flags are unrecognized:\n\n`dumpe2fs -f {{/dev/sdXN}}`\n\n- Only display the superblock information, without any of the block group descriptor details:\n\n`dumpe2fs -h {{/dev/sdXN}}`\n\n- Print the detailed group information block numbers in hexadecimal format:\n\n`dumpe2fs -x {{/dev/sdXN}}`\n
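As a sketch of how the -h output is typically consumed (the device is a placeholder, and the grep patterns assume dumpe2fs's usual field labels such as "Block count:" and "Filesystem state:"):

# Show only the superblock and pick out a few fields of interest
sudo dumpe2fs -h {{/dev/sdXN}} | grep -Ei 'block count|block size|filesystem state'

# Locate backup superblocks, which can later be passed to e2fsck -b
sudo dumpe2fs {{/dev/sdXN}} | grep -i 'backup superblock'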
e2freefrag
e2freefrag(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training e2freefrag(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLE | AUTHOR | SEE ALSO | COLOPHON E2FREEFRAG(8) System Manager's Manual E2FREEFRAG(8) NAME top e2freefrag - report free space fragmentation information SYNOPSIS top e2freefrag [ -c chunk_kb ] [ -h ] filesys DESCRIPTION top e2freefrag is used to report free space fragmentation on ext2/3/4 file systems. filesys is the file system device name (e.g. /dev/hdc1, /dev/md0). The e2freefrag program will scan the block bitmap information to check how many free blocks are present as contiguous and aligned free space. The percentage of contiguous free blocks of size and of alignment chunk_kb is reported. It also displays the minimum/maximum/average free chunk size in the file system, along with a histogram of all free chunks. This information can be used to gauge the level of free space fragmentation in the file system. OPTIONS top -c chunk_kb If a chunk size is specified, then e2freefrag will print how many free chunks of size chunk_kb are available in units of kilobytes (Kb). The chunk size must be a power of two and be larger than file system block size. -h Print the usage of the program. EXAMPLE top # e2freefrag /dev/vgroot/lvhome Device: /dev/vgroot/lvhome Blocksize: 4096 bytes Total blocks: 1504085 Free blocks: 292995 (19.5%) Min. free extent: 4 KB Max. free extent: 24008 KB Avg. free extent: 252 KB HISTOGRAM OF FREE EXTENT SIZES: Extent Size Range : Free extents Free Blocks Percent 4K... 8K- : 704 704 0.2% 8K... 16K- : 810 1979 0.7% 16K... 32K- : 843 4467 1.5% 32K... 64K- : 579 6263 2.1% 64K... 128K- : 493 11067 3.8% 128K... 256K- : 394 18097 6.2% 256K... 512K- : 281 25477 8.7% 512K... 1024K- : 253 44914 15.3% 1M... 2M- : 143 51897 17.7% 2M... 4M- : 73 50683 17.3% 4M... 8M- : 37 52417 17.9% 8M... 16M- : 7 19028 6.5% 16M... 32M- : 1 6002 2.0% AUTHOR top This version of e2freefrag was written by Rupesh Thakare, and modified by Andreas Dilger <adilger@sun.com>, and Kalpak Shah. SEE ALSO top debugfs(8), dumpe2fs(8), e2fsck(8) COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 E2FREEFRAG(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# e2freefrag\n\n> Print the free space fragmentation information for ext2/ext3/ext4 filesystems.\n> More information: <https://manned.org/e2freefrag>.\n\n- Check how many free blocks are present as contiguous and aligned free space:\n\n`e2freefrag {{/dev/sdXN}}`\n\n- Specify chunk size in kilobytes to print how many free chunks are available:\n\n`e2freefrag -c {{chunk_size_in_kb}} {{/dev/sdXN}}`\n
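For instance, to check whether large contiguous allocations are still possible, a chunk size can be passed in KiB; the value below is only an example, and it must be a power of two larger than the filesystem block size:

# Count aligned 1 MiB free chunks remaining on the filesystem
sudo e2freefrag -c 1024 {{/dev/sdXN}}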
e2fsck
e2fsck(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training e2fsck(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT CODE | SIGNALS | REPORTING BUGS | ENVIRONMENT | AUTHOR | SEE ALSO | COLOPHON E2FSCK(8) System Manager's Manual E2FSCK(8) NAME top e2fsck - check a Linux ext2/ext3/ext4 file system SYNOPSIS top e2fsck [ -pacnyrdfkvtDFV ] [ -b superblock ] [ -B blocksize ] [ -l|-L bad_blocks_file ] [ -C fd ] [ -j external-journal ] [ -E extended_options ] [ -z undo_file ] device DESCRIPTION top e2fsck is used to check the ext2/ext3/ext4 family of file systems. For ext3 and ext4 file systems that use a journal, if the system has been shut down uncleanly without any errors, normally, after replaying the committed transactions in the journal, the file system should be marked as clean. Hence, for file systems that use journaling, e2fsck will normally replay the journal and exit, unless its superblock indicates that further checking is required. device is a block device (e.g., /dev/sdc1) or file containing the file system. Note that in general it is not safe to run e2fsck on mounted file systems. The only exception is if the -n option is specified, and -c, -l, or -L options are not specified. However, even if it is safe to do so, the results printed by e2fsck are not valid if the file system is mounted. If e2fsck asks whether or not you should check a file system which is mounted, the only correct answer is ``no''. Only experts who really know what they are doing should consider answering this question in any other way. If e2fsck is run in interactive mode (meaning that none of -y, -n, or -p are specified), the program will ask the user to fix each problem found in the file system. A response of 'y' will fix the error; 'n' will leave the error unfixed; and 'a' will fix the problem and all subsequent problems; pressing Enter will proceed with the default response, which is printed before the question mark. Pressing Control-C terminates e2fsck immediately. OPTIONS top -a This option does the same thing as the -p option. It is provided for backwards compatibility only; it is suggested that people use -p option whenever possible. -b superblock Instead of using the normal superblock, use an alternative superblock specified by superblock. This option is normally used when the primary superblock has been corrupted. The location of backup superblocks is dependent on the file system's blocksize, the number of blocks per group, and features such as sparse_super. Additional backup superblocks can be determined by using the mke2fs program using the -n option to print out where the superblocks exist, supposing mke2fs is supplied with arguments that are consistent with the file system's layout (e.g. blocksize, blocks per group, sparse_super, etc.). If an alternative superblock is specified and the file system is not opened read-only, e2fsck will make sure that the primary superblock is updated appropriately upon completion of the file system check. -B blocksize Normally, e2fsck will search for the superblock at various different block sizes in an attempt to find the appropriate block size. This search can be fooled in some cases. This option forces e2fsck to only try locating the superblock at a particular blocksize. If the superblock is not found, e2fsck will terminate with a fatal error. -c This option causes e2fsck to use badblocks(8) program to do a read-only scan of the device in order to find any bad blocks. 
If any bad blocks are found, they are added to the bad block inode to prevent them from being allocated to a file or directory. If this option is specified twice, then the bad block scan will be done using a non- destructive read-write test. -C fd This option causes e2fsck to write completion information to the specified file descriptor so that the progress of the file system check can be monitored. This option is typically used by programs which are running e2fsck. If the file descriptor number is negative, then absolute value of the file descriptor will be used, and the progress information will be suppressed initially. It can later be enabled by sending the e2fsck process a SIGUSR1 signal. If the file descriptor specified is 0, e2fsck will print a completion bar as it goes about its business. This requires that e2fsck is running on a video console or terminal. -d Print debugging output (useless unless you are debugging e2fsck). -D Optimize directories in file system. This option causes e2fsck to try to optimize all directories, either by re- indexing them if the file system supports directory indexing, or by sorting and compressing directories for smaller directories, or for file systems using traditional linear directories. Even without the -D option, e2fsck may sometimes optimize a few directories --- for example, if directory indexing is enabled and a directory is not indexed and would benefit from being indexed, or if the index structures are corrupted and need to be rebuilt. The -D option forces all directories in the file system to be optimized. This can sometimes make them a little smaller and slightly faster to search, but in practice, you should rarely need to use this option. The -D option will detect directory entries with duplicate names in a single directory, which e2fsck normally does not enforce for performance reasons. -E extended_options Set e2fsck extended options. Extended options are comma separated, and may take an argument using the equals ('=') sign. The following options are supported: ea_ver=extended_attribute_version Set the version of the extended attribute blocks which e2fsck will require while checking the file system. The version number may be 1 or 2. The default extended attribute version format is 2. journal_only Only replay the journal if required, but do not perform any further checks or repairs. fragcheck During pass 1, print a detailed report of any discontiguous blocks for files in the file system. discard Attempt to discard free blocks and unused inode blocks after the full file system check (discarding blocks is useful on solid state devices and sparse / thin-provisioned storage). Note that discard is done in pass 5 AFTER the file system has been fully checked and only if it does not contain recognizable errors. However there might be cases where e2fsck does not fully recognize a problem and hence in this case this option may prevent you from further manual data recovery. nodiscard Do not attempt to discard free blocks and unused inode blocks. This option is exactly the opposite of discard option. This is set as default. no_optimize_extents Do not offer to optimize the extent tree by eliminating unnecessary width or depth. This can also be enabled in the options section of /etc/e2fsck.conf. optimize_extents Offer to optimize the extent tree by eliminating unnecessary width or depth. This is the default unless otherwise specified in /etc/e2fsck.conf. 
inode_count_fullmap Trade off using memory for speed when checking a file system with a large number of hard- linked files. The amount of memory required is proportional to the number of inodes in the file system. For large file systems, this can be gigabytes of memory. (For example, a 40TB file system with 2.8 billion inodes will consume an additional 5.7 GB memory if this optimization is enabled.) This optimization can also be enabled in the options section of /etc/e2fsck.conf. no_inode_count_fullmap Disable the inode_count_fullmap optimization. This is the default unless otherwise specified in /etc/e2fsck.conf. readahead_kb Use this many KiB of memory to pre-fetch metadata in the hopes of reducing e2fsck runtime. By default, this is set to the size of two block groups' inode tables (typically 4MiB on a regular ext4 file system); if this amount is more than 1/50th of total physical memory, readahead is disabled. Set this to zero to disable readahead entirely. bmap2extent Convert block-mapped files to extent-mapped files. fixes_only Only fix damaged metadata; do not optimize htree directories or compress extent trees. This option is incompatible with the -D and -E bmap2extent options. check_encoding Force verification of encoded filenames in case-insensitive directories. This is the default mode if the file system has the strict flag enabled. unshare_blocks If the file system has shared blocks, with the shared blocks read-only feature enabled, then this will unshare all shared blocks and unset the read-only feature bit. If there is not enough free space then the operation will fail. If the file system does not have the read-only feature bit, but has shared blocks anyway, then this option will have no effect. Note when using this option, if there is no free space to clone blocks, there is no prompt to delete files and instead the operation will fail. Note that unshare_blocks implies the "-f" option to ensure that all passes are run. Additionally, if "-n" is also specified, e2fsck will simulate trying to allocate enough space to deduplicate. If this fails, the exit code will be non-zero. -f Force checking even if the file system seems clean. -F Flush the file system device's buffer caches before beginning. Only really useful for doing e2fsck time trials. -j external-journal Set the pathname where the external-journal for this file system can be found. -k When combined with the -c option, any existing bad blocks in the bad blocks list are preserved, and any new bad blocks found by running badblocks(8) will be added to the existing bad blocks list. -l filename Add the block numbers listed in the file specified by filename to the list of bad blocks. The format of this file is the same as the one generated by the badblocks(8) program. Note that the block numbers are based on the blocksize of the file system. Hence, badblocks(8) must be given the blocksize of the file system in order to obtain correct results. As a result, it is much simpler and safer to use the -c option to e2fsck, since it will assure that the correct parameters are passed to the badblocks program. -L filename Set the bad blocks list to be the list of blocks specified by filename. (This option is the same as the -l option, except the bad blocks list is cleared before the blocks listed in the file are added to the bad blocks list.) -n Open the file system read-only, and assume an answer of `no' to all questions. Allows e2fsck to be used non- interactively. 
This option may not be specified at the same time as the -p or -y options. -p Automatically repair ("preen") the file system. This option will cause e2fsck to automatically fix any file system problems that can be safely fixed without human intervention. If e2fsck discovers a problem which may require the system administrator to take additional corrective action, e2fsck will print a description of the problem and then exit with the value 4 logically or'ed into the exit code. (See the EXIT CODE section.) This option is normally used by the system's boot scripts. It may not be specified at the same time as the -n or -y options. -r This option does nothing at all; it is provided only for backwards compatibility. -t Print timing statistics for e2fsck. If this option is used twice, additional timing statistics are printed on a pass by pass basis. -v Verbose mode. -V Print version information and exit. -y Assume an answer of `yes' to all questions; allows e2fsck to be used non-interactively. This option may not be specified at the same time as the -n or -p options. -z undo_file Before overwriting a file system block, write the old contents of the block to an undo file. This undo file can be used with e2undo(8) to restore the old contents of the file system should something go wrong. If the empty string is passed as the undo_file argument, the undo file will be written to a file named e2fsck-device.e2undo in the directory specified via the E2FSPROGS_UNDO_DIR environment variable. WARNING: The undo file cannot be used to recover from a power or system crash. EXIT CODE top The exit code returned by e2fsck is the sum of the following conditions: 0 - No errors 1 - File system errors corrected 2 - File system errors corrected, system should be rebooted 4 - File system errors left uncorrected 8 - Operational error 16 - Usage or syntax error 32 - E2fsck canceled by user request 128 - Shared library error SIGNALS top The following signals have the following effect when sent to e2fsck. SIGUSR1 This signal causes e2fsck to start displaying a completion bar or emitting progress information. (See discussion of the -C option.) SIGUSR2 This signal causes e2fsck to stop displaying a completion bar or emitting progress information. REPORTING BUGS top Almost any piece of software will have bugs. If you manage to find a file system which causes e2fsck to crash, or which e2fsck is unable to repair, please report it to the author. Please include as much information as possible in your bug report. Ideally, include a complete transcript of the e2fsck run, so I can see exactly what error messages are displayed. (Make sure the messages printed by e2fsck are in English; if your system has been configured so that e2fsck's messages have been translated into another language, please set the the LC_ALL environment variable to C so that the transcript of e2fsck's output will be useful to me.) If you have a writable file system where the transcript can be stored, the script(1) program is a handy way to save the output of e2fsck to a file. It is also useful to send the output of dumpe2fs(8). If a specific inode or inodes seems to be giving e2fsck trouble, try running the debugfs(8) command and send the output of the stat(1u) command run on the relevant inode(s). If the inode is a directory, the debugfs dump command will allow you to extract the contents of the directory inode, which can sent to me after being first run through uuencode(1). 
The most useful data you can send to help reproduce the bug is a compressed raw image dump of the file system, generated using e2image(8). See the e2image(8) man page for more details. Always include the full version string which e2fsck displays when it is run, so I know which version you are running. ENVIRONMENT top E2FSCK_CONFIG Determines the location of the configuration file (see e2fsck.conf(5)). AUTHOR top This version of e2fsck was written by Theodore Ts'o <tytso@mit.edu>. SEE ALSO top e2fsck.conf(5), badblocks(8), dumpe2fs(8), debugfs(8), e2image(8), mke2fs(8), tune2fs(8) COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 E2FSCK(8) Pages that refer to this page: fuse2fs(1), lseek64(3), e2fsck.conf(5), ext4(5), mke2fs.conf(5), badblocks(8), debugfs(8), dumpe2fs(8), e2freefrag(8), e2image(8), e2mmpstatus(8), fsck(8@@e2fsprogs), fsck(8), mke2fs(8), mklost+found(8), quotacheck(8), resize2fs(8), tune2fs(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
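Because the exit status is a bitmask of the conditions listed above, scripts normally test individual bits rather than comparing against a single value. A minimal sketch, with a placeholder device:

# Preen the filesystem and react to the bitmask exit code
e2fsck -p {{/dev/sdXN}}
status=$?
if [ $((status & 4)) -ne 0 ]; then
    echo "errors left uncorrected, run e2fsck manually" >&2
elif [ $((status & 2)) -ne 0 ]; then
    echo "errors corrected, a reboot is recommended" >&2
fi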
# e2fsck\n\n> Check a Linux ext2/ext3/ext4 filesystem. The partition should be unmounted.\n> More information: <https://manned.org/e2fsck>.\n\n- Check the filesystem, interactively prompting before repairing any errors:\n\n`sudo e2fsck {{/dev/sdXN}}`\n\n- Check the filesystem and automatically repair ("preen") any problems that can be fixed safely:\n\n`sudo e2fsck -p {{/dev/sdXN}}`\n\n- Check the filesystem in read-only mode, answering no to all repair prompts:\n\n`sudo e2fsck -n {{/dev/sdXN}}`\n\n- Perform an exhaustive, non-destructive read-write test for bad blocks and blacklist them:\n\n`sudo e2fsck -fccky {{/dev/sdXN}}`\n
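When the primary superblock is damaged, a backup superblock can be supplied with -b. The block number below is only the common default for filesystems with a 4 KiB block size; as the manual above notes, mke2fs -n (a dry run that creates nothing) can list the real locations, provided it is given parameters matching the original filesystem:

# List backup superblock locations without writing anything
mke2fs -n {{/dev/sdXN}}

# Repair using the backup superblock at block 32768
sudo e2fsck -b 32768 {{/dev/sdXN}}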
e2image
e2image(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training e2image(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | RAW IMAGE FILES | QCOW2 IMAGE FILES | OFFSETS | AUTHOR | AVAILABILITY | SEE ALSO | COLOPHON E2IMAGE(8) System Manager's Manual E2IMAGE(8) NAME top e2image - Save critical ext2/ext3/ext4 file system metadata to a file SYNOPSIS top e2image [-r|-Q [-af]] [ -b superblock ] [ -B blocksize ] [ -cnps ] [ -o src_offset ] [ -O dest_offset ] device image-file e2image -I device image-file DESCRIPTION top The e2image program will save critical ext2, ext3, or ext4 file system metadata located on device to a file specified by image- file. The image file may be examined by dumpe2fs and debugfs, by using the -i option to those programs. This can assist an expert in recovering catastrophically corrupted file systems. It is a very good idea to create image files for all file systems on a system and save the partition layout (which can be generated using the fdisk -l command) at regular intervals --- at boot time, and/or every week or so. The image file should be stored on some file system other than the file system whose data it contains, to ensure that this data is accessible in the case where the file system has been badly damaged. To save disk space, e2image creates the image file as a sparse file, or in QCOW2 format. Hence, if the sparse image file needs to be copied to another location, it should either be compressed first or copied using the --sparse=always option to the GNU version of cp(1). This does not apply to the QCOW2 image, which is not sparse. The size of an ext2 image file depends primarily on the size of the file systems and how many inodes are in use. For a typical 10 Gigabyte file system, with 200,000 inodes in use out of 1.2 million inodes, the image file will be approximately 35 Megabytes; a 4 Gigabyte file system with 15,000 inodes in use out of 550,000 inodes will result in a 3 Megabyte image file. Image files tend to be quite compressible; an image file taking up 32 Megabytes of space on disk will generally compress down to 3 or 4 Megabytes. If image-file is -, then the output of e2image will be sent to standard output, so that the output can be piped to another program, such as gzip(1). (Note that this is currently only supported when creating a raw image file using the -r option, since the process of creating a normal image file, or QCOW2 image currently requires random access to the file, which cannot be done using a pipe. OPTIONS top -a Include file data in the image file. Normally e2image only includes fs metadata, not regular file data. This option will produce an image that is suitable to use to clone the entire FS or for backup purposes. Note that this option only works with the raw (-r) or QCOW2 (-Q) formats. In conjunction with the -r option it is possible to clone all and only the used blocks of one file system to another device/image file. -b superblock Get image from partition with broken primary superblock by using the superblock located at file system block number superblock. The partition is copied as-is including broken primary superblock. -B blocksize Set the file system blocksize in bytes. Normally, e2image will search for the superblock at various different block sizes in an attempt to find the appropriate blocksize. This search can be fooled in some cases. This option forces e2fsck to only try locating the superblock with a particular blocksize. 
If the superblock is not found, e2image will terminate with a fatal error. -c Compare each block to be copied from the source device to the corresponding block in the target image-file. If both are already the same, the write will be skipped. This is useful if the file system is being cloned to a flash-based storage device (where reads are very fast and where it is desirable to avoid unnecessary writes to reduce write wear on the device). -f Override the read-only requirement for the source file system when saving the image file using the -r and -Q options. Normally, if the source file system is in use, the resulting image file is very likely not going to be useful. In some cases where the source file system is in constant use this may be better than no image at all. -I install the metadata stored in the image file back to the device. It can be used to restore the file system metadata back to the device in emergency situations. WARNING!!!! The -I option should only be used as a desperation measure when other alternatives have failed. If the file system has changed since the image file was created, data will be lost. In general, you should make another full image backup of the file system first, in case you wish to try other recovery strategies afterward. -n Cause all image writes to be skipped, and instead only print the block numbers that would have been written. -o src_offset Specify offset of the image to be read from the start of the source device in bytes. See OFFSETS for more details. -O tgt_offset Specify offset of the image to be written from the start of the target image-file in bytes. See OFFSETS for more details. -p Show progress of image-file creation. -Q Create a QCOW2-format image file instead of a normal image file, suitable for use by virtual machine images, and other tools that can use the .qcow image format. See QCOW2 IMAGE FILES below for details. -r Create a raw image file instead of a normal image file. See RAW IMAGE FILES below for details. -s Scramble directory entries and zero out unused portions of the directory blocks in the written image file to avoid revealing information about the contents of the file system. However, this will prevent analysis of problems related to hash-tree indexed directories. RAW IMAGE FILES top The -r option will create a raw image file, which differs from a normal image file in two ways. First, the file system metadata is placed in the same relative offset within image-file as it is in the device so that debugfs(8), dumpe2fs(8), e2fsck(8), losetup(8), etc. and can be run directly on the raw image file. In order to minimize the amount of disk space consumed by the raw image file, it is created as a sparse file. (Beware of copying or compressing/decompressing this file with utilities that don't understand how to create sparse files; the file will become as large as the file system itself!) Secondly, the raw image file also includes indirect blocks and directory blocks, which the standard image file does not have. Raw image files are sometimes used when sending file systems to the maintainer as part of bug reports to e2fsprogs. When used in this capacity, the recommended command is as follows (replace hda1 with the appropriate device for your system): e2image -r /dev/hda1 - | bzip2 > hda1.e2i.bz2 This will only send the metadata information, without any data blocks. However, the filenames in the directory blocks can still reveal information about the contents of the file system that the bug reporter may wish to keep confidential. 
To address this concern, the -s option can be specified to scramble the filenames in the image. Note that this will work even if you substitute /dev/hda1 for another raw disk image, or QCOW2 image previously created by e2image. QCOW2 IMAGE FILES top The -Q option will create a QCOW2 image file instead of a normal, or raw image file. A QCOW2 image contains all the information the raw image does, however unlike the raw image it is not sparse. The QCOW2 image minimize the amount of space used by the image by storing it in special format which packs data closely together, hence avoiding holes while still minimizing size. In order to send file system to the maintainer as a part of bug report to e2fsprogs, use following commands (replace hda1 with the appropriate device for your system): e2image -Q /dev/hda1 hda1.qcow2 bzip2 -z hda1.qcow2 This will only send the metadata information, without any data blocks. As described for RAW IMAGE FILES the -s option can be specified to scramble the file system names in the image. Note that the QCOW2 image created by e2image is a regular QCOW2 image and can be processed by tools aware of QCOW2 format such as for example qemu-img. You can convert a .qcow2 image into a raw image with: e2image -r hda1.qcow2 hda1.raw This can be useful to write a QCOW2 image containing all data to a sparse image file where it can be loop mounted, or to a disk partition. Note that this may not work with QCOW2 images not generated by e2image. OFFSETS top Normally a file system starts at the beginning of a partition, and e2image is run on the partition. When working with image files, you don't have the option of using the partition device, so you can specify the offset where the file system starts directly with the -o option. Similarly the -O option specifies the offset that should be seeked to in the destination before writing the file system. For example, if you have a dd image of a whole hard drive that contains an ext2 fs in a partition starting at 1 MiB, you can clone that image to a block device with: e2image -aro 1048576 img /dev/sda1 Or you can clone a file system from a block device into an image file, leaving room in the first MiB for a partition table with: e2image -arO 1048576 /dev/sda1 img If you specify at least one offset, and only one file, an in- place move will be performed, allowing you to safely move the file system from one offset to another. AUTHOR top e2image was written by Theodore Ts'o (tytso@mit.edu). AVAILABILITY top e2image is part of the e2fsprogs package and is available from http://e2fsprogs.sourceforge.net. SEE ALSO top dumpe2fs(8), debugfs(8) e2fsck(8) COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 E2IMAGE(8) Pages that refer to this page: e2fsck(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# e2image\n\n> Save critical ext2/ext3/ext4 filesystem metadata to a file.\n> More information: <https://manned.org/e2image>.\n\n- Write metadata located on device to a specific file:\n\n`e2image {{/dev/sdXN}} {{path/to/image_file}}`\n\n- Write a raw metadata image to `stdout` (only the raw format can be piped):\n\n`e2image -r {{/dev/sdXN}} -`\n\n- Restore the filesystem metadata back to the device:\n\n`e2image -I {{/dev/sdXN}} {{path/to/image_file}}`\n\n- Create a large raw sparse file with metadata at proper offsets:\n\n`e2image -r {{/dev/sdXN}} {{path/to/image_file}}`\n\n- Create a QCOW2 image file instead of a normal or raw image file:\n\n`e2image -Q {{/dev/sdXN}} {{path/to/image_file}}`\n
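Tying the summary to the workflow in the manual above, metadata-only images are usually compressed before being attached to bug reports, and a QCOW2 image can later be converted into a raw, loop-mountable file; the device and file names are placeholders:

# Stream a raw metadata image to a compressed file, scrambling filenames for privacy
sudo e2image -rs {{/dev/sdXN}} - | bzip2 > {{image_file}}.e2i.bz2

# Convert a previously created QCOW2 image into a sparse raw image
e2image -r {{image_file}}.qcow2 {{image_file}}.raw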
e2label
e2label(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training e2label(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | AVAILABILITY | SEE ALSO | COLOPHON E2LABEL(8) System Manager's Manual E2LABEL(8) NAME top e2label - Change the label on an ext2/ext3/ext4 file system SYNOPSIS top e2label device [ volume-label ] DESCRIPTION top e2label will display or change the volume label on the ext2, ext3, or ext4 file system located on device. If the optional argument volume-label is not present, e2label will simply display the current volume label. If the optional argument volume-label is present, then e2label will set the volume label to be volume-label. Ext2 volume labels can be at most 16 characters long; if volume-label is longer than 16 characters, e2label will truncate it and print a warning message. For other file systems that support online label manipulation and are mounted e2label will work as well, but it will not attempt to truncate the volume-label at all. It is also possible to set the volume label using the -L option of tune2fs(8). AUTHOR top e2label was written by Theodore Ts'o (tytso@mit.edu). AVAILABILITY top e2label is part of the e2fsprogs package and is available from http://e2fsprogs.sourceforge.net. SEE ALSO top mke2fs(8), tune2fs(8) COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 E2LABEL(8) Pages that refer to this page: fstab(5), mount(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# e2label\n\n> Change the label on an ext2/ext3/ext4 filesystem.\n> More information: <https://manned.org/e2label>.\n\n- Change the volume label on a specific ext partition:\n\n`e2label {{/dev/sda1}} "{{label_name}}"`\n
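Because e2label with no second argument simply prints the current label, a typical sequence looks like the sketch below; the device and label are placeholders, and labels longer than 16 characters are truncated with a warning:

# Show the current volume label
sudo e2label {{/dev/sda1}}

# Set a new label of at most 16 characters
sudo e2label {{/dev/sda1}} "{{label_name}}"

# The same change can also be made with tune2fs
sudo tune2fs -L "{{label_name}}" {{/dev/sda1}}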
e2undo
e2undo(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training e2undo(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | AUTHOR | AVAILABILITY | SEE ALSO | COLOPHON E2UNDO(8) System Manager's Manual E2UNDO(8) NAME top e2undo - Replay an undo log for an ext2/ext3/ext4 file system SYNOPSIS top e2undo [ -f ] [ -h ] [ -n ] [ -o offset ] [ -v ] [ -z undo_file ] undo_log device DESCRIPTION top e2undo will replay the undo log undo_log for an ext2/ext3/ext4 file system found on device. This can be used to undo a failed operation by an e2fsprogs program. OPTIONS top -f Normally, e2undo will check the file system superblock to make sure the undo log matches with the file system on the device. If they do not match, e2undo will refuse to apply the undo log as a safety mechanism. The -f option disables this safety mechanism. -h Display a usage message. -n Dry-run; do not actually write blocks back to the file system. -o offset Specify the file system's offset (in bytes) from the beginning of the device or file. -v Report which block we're currently replaying. -z undo_file Before overwriting a file system block, write the old contents of the block to an undo file. This undo file can be used with e2undo(8) to restore the old contents of the file system should something go wrong. If the empty string is passed as the undo_file argument, the undo file will be written to a file named e2undo-device.e2undo in the directory specified via the E2FSPROGS_UNDO_DIR environment variable. WARNING: The undo file cannot be used to recover from a power or system crash. AUTHOR top e2undo was written by Aneesh Kumar K.V. (aneesh.kumar@linux.vnet.ibm.com) AVAILABILITY top e2undo is part of the e2fsprogs package and is available from http://e2fsprogs.sourceforge.net. SEE ALSO top mke2fs(8), tune2fs(8) COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 E2UNDO(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# e2undo\n\n> Replay undo logs for an ext2/ext3/ext4 filesystem.\n> This can be used to undo a failed operation by an e2fsprogs program.\n> More information: <https://man7.org/linux/man-pages/man8/e2undo.8.html>.\n\n- Display a usage message:\n\n`e2undo -h`\n\n- Perform a dry-run and display the candidate blocks for replaying:\n\n`e2undo -nv {{path/to/undo_file}} {{/dev/sdXN}}`\n\n- Perform an undo operation:\n\n`e2undo {{path/to/undo_file}} {{/dev/sdXN}}`\n\n- Perform an undo operation and display verbose information:\n\n`e2undo -v {{path/to/undo_file}} {{/dev/sdXN}}`\n\n- Replay the undo log, saving the previous contents of each overwritten block to a new undo file:\n\n`e2undo -z {{path/to/file.e2undo}} {{path/to/undo_file}} {{/dev/sdXN}}`\n
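Undo files are normally produced by other e2fsprogs tools through their -z option; a sketch of the full round trip, assuming a tune2fs new enough to support -z, with placeholder paths and a deliberately harmless change (setting the reserved-blocks percentage):

# Make a change while recording an undo log
sudo tune2fs -z {{path/to/undo_file}} -m 1 {{/dev/sdXN}}

# Preview which blocks replaying the log would touch
sudo e2undo -n -v {{path/to/undo_file}} {{/dev/sdXN}}

# Roll the change back
sudo e2undo {{path/to/undo_file}} {{/dev/sdXN}}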
e4defrag
e4defrag(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training e4defrag(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | NOTES | AUTHOR | SEE ALSO | COLOPHON E4DEFRAG(8) System Manager's Manual E4DEFRAG(8) NAME top e4defrag - online defragmenter for ext4 file system SYNOPSIS top e4defrag [ -c ] [ -v ] target ... DESCRIPTION top e4defrag reduces fragmentation of extent based file. The file targeted by e4defrag is created on ext4 file system made with "-O extent" option (see mke2fs(8)). The targeted file gets more contiguous blocks and improves the file access speed. target is a regular file, a directory, or a device that is mounted as ext4 file system. If target is a directory, e4defrag reduces fragmentation of all files in it. If target is a device, e4defrag gets the mount point of it and reduces fragmentation of all files in this mount point. OPTIONS top -c Get a current fragmentation count and an ideal fragmentation count, and calculate fragmentation score based on them. By seeing this score, we can determine whether we should execute e4defrag to target. When used with -v option, the current fragmentation count and the ideal fragmentation count are printed for each file. Also this option outputs the average data size in one extent. If you see it, you'll find the file has ideal extents or not. Note that the maximum extent size is 131072KB in ext4 file system (if block size is 4KB). If this option is specified, target is never defragmented. -v Print error messages and the fragmentation count before and after defrag for each file. NOTES top e4defrag does not support swap file, files in lost+found directory, and files allocated in indirect blocks. When target is a device or a mount point, e4defrag doesn't defragment files in mount point of other device. It is safe to run e4defrag on a file while it is actively in use by another application. Since the contents of file blocks are copied using the page cache, this can result in a performance slowdown to both e4defrag and the application due to contention over the system's memory and disk bandwidth. If the file system's free space is fragmented, or if there is insufficient free space available, e4defrag may not be able to improve the file's fragmentation. Non-privileged users can execute e4defrag to their own file, but the score is not printed if -c option is specified. Therefore, it is desirable to be executed by root user. AUTHOR top Written by Akira Fujita <a-fujita@rs.jp.nec.com> and Takashi Sato <t-sato@yk.jp.nec.com>. SEE ALSO top mke2fs(8), mount(8). COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org e4defrag version 2.0 May 2009 E4DEFRAG(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# e4defrag\n\n> Defragment an ext4 filesystem.\n> More information: <https://manned.org/e4defrag>.\n\n- Defragment the filesystem:\n\n`e4defrag {{/dev/sdXN}}`\n\n- See how fragmented a filesystem is:\n\n`e4defrag -c {{/dev/sdXN}}`\n\n- Print error messages and the fragmentation count before and after defragmenting each file:\n\n`e4defrag -v {{/dev/sdXN}}`\n
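A typical workflow is to score fragmentation first and defragment only when the score is high; running as root ensures the score is printed, and the paths are placeholders:

# Report current and ideal extent counts plus a fragmentation score
sudo e4defrag -c {{/path/to/mountpoint}}

# Defragment one directory tree, printing per-file fragment counts before and after
sudo e4defrag -v {{/path/to/mountpoint/data}}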
echo
echo(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training echo(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON ECHO(1) User Commands ECHO(1) NAME top echo - display a line of text SYNOPSIS top echo [SHORT-OPTION]... [STRING]... echo LONG-OPTION DESCRIPTION top Echo the STRING(s) to standard output. -n do not output the trailing newline -e enable interpretation of backslash escapes -E disable interpretation of backslash escapes (default) --help display this help and exit --version output version information and exit If -e is in effect, the following sequences are recognized: \\ backslash \a alert (BEL) \b backspace \c produce no further output \e escape \f form feed \n new line \r carriage return \t horizontal tab \v vertical tab \0NNN byte with octal value NNN (1 to 3 digits) \xHH byte with hexadecimal value HH (1 to 2 digits) NOTE: your shell may have its own version of echo, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports. NOTE: printf(1) is a preferred alternative, which does not have issues outputting option-like strings. AUTHOR top Written by Brian Fox and Chet Ramey. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top printf(1) Full documentation <https://www.gnu.org/software/coreutils/echo> or available locally via: info '(coreutils) echo invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 ECHO(1) Pages that refer to this page: ldapcompare(1), systemd-ask-password(1), systemd-run(1), cpuset(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# echo\n\n> Print given arguments.\n> More information: <https://www.gnu.org/software/coreutils/echo>.\n\n- Print a text message. Note: quotes are optional:\n\n`echo "{{Hello World}}"`\n\n- Print a message with environment variables:\n\n`echo "{{My path is $PATH}}"`\n\n- Print a message without the trailing newline:\n\n`echo -n "{{Hello World}}"`\n\n- Append a message to the file:\n\n`echo "{{Hello World}}" >> {{file.txt}}`\n\n- Enable interpretation of backslash escapes (special characters):\n\n`echo -e "{{Column 1\tColumn 2}}"`\n\n- Print the exit status of the last executed command (Note: In Windows Command Prompt and PowerShell the equivalent commands are `echo %errorlevel%` and `$lastexitcode` respectively):\n\n`echo $?`\n
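As the NOTE in the manual page points out, echo's escape handling and its treatment of option-like strings can surprise; a small comparison sketch, assuming a POSIX-ish shell where printf(1) is available:

```sh
# -e turns on backslash escapes; without it the backslashes are printed as-is.
echo -e "col1\tcol2"

# -n suppresses the trailing newline.
echo -n "no newline"

# printf is the safer choice for arbitrary data, e.g. strings that look like options.
printf '%s\n' "-n is printed literally here"
```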
ed
ed(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training ed(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT ED(1P) POSIX Programmer's Manual ED(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top ed edit text SYNOPSIS top ed [-p string] [-s] [file] DESCRIPTION top The ed utility is a line-oriented text editor that uses two modes: command mode and input mode. In command mode the input characters shall be interpreted as commands, and in input mode they shall be interpreted as text. See the EXTENDED DESCRIPTION section. If an operand is '-', the results are unspecified. OPTIONS top The ed utility shall conform to the Base Definitions volume of POSIX.12017, Section 12.2, Utility Syntax Guidelines, except for the unspecified usage of '-'. The following options shall be supported: -p string Use string as the prompt string when in command mode. By default, there shall be no prompt string. -s Suppress the writing of byte counts by e, E, r, and w commands and of the '!' prompt after a !command. OPERANDS top The following operand shall be supported: file If the file argument is given, ed shall simulate an e command on the file named by the pathname, file, before accepting commands from the standard input. STDIN top The standard input shall be a text file consisting of commands, as described in the EXTENDED DESCRIPTION section. INPUT FILES top The input files shall be text files. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of ed: HOME Determine the pathname of the user's home directory. LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_COLLATE Determine the locale for the behavior of ranges, equivalence classes, and multi-character collating elements within regular expressions. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files) and the behavior of character classes within regular expressions. LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error and informative messages written to standard output. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top The ed utility shall take the standard action for all signals (see the ASYNCHRONOUS EVENTS section in Section 1.4, Utility Description Defaults) with the following exceptions: SIGINT The ed utility shall interrupt its current activity, write the string "?\n" to standard output, and return to command mode (see the EXTENDED DESCRIPTION section). 
SIGHUP If the buffer is not empty and has changed since the last write, the ed utility shall attempt to write a copy of the buffer in a file. First, the file named ed.hup in the current directory shall be used; if that fails, the file named ed.hup in the directory named by the HOME environment variable shall be used. In any case, the ed utility shall exit without writing the file to the currently remembered pathname and without returning to command mode. SIGQUIT The ed utility shall ignore this event. STDOUT top Various editing commands and the prompting feature (see -p) write to standard output, as described in the EXTENDED DESCRIPTION section. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top The output files shall be text files whose formats are dependent on the editing commands given. EXTENDED DESCRIPTION top The ed utility shall operate on a copy of the file it is editing; changes made to the copy shall have no effect on the file until a w (write) command is given. The copy of the text is called the buffer. Commands to ed have a simple and regular structure: zero, one, or two addresses followed by a single-character command, possibly followed by parameters to that command. These addresses specify one or more lines in the buffer. Every command that requires addresses has default addresses, so that the addresses very often can be omitted. If the -p option is specified, the prompt string shall be written to standard output before each command is read. In general, only one command can appear on a line. Certain commands allow text to be input. This text is placed in the appropriate place in the buffer. While ed is accepting text, it is said to be in input mode. In this mode, no commands shall be recognized; all input is merely collected. Input mode is terminated by entering a line consisting of two characters: a <period> ('.') followed by a <newline>. This line is not considered part of the input text. Regular Expressions in ed The ed utility shall support basic regular expressions, as described in the Base Definitions volume of POSIX.12017, Section 9.3, Basic Regular Expressions. Since regular expressions in ed are always matched against single lines (excluding the terminating <newline> characters), never against any larger section of text, there is no way for a regular expression to match a <newline>. A null RE shall be equivalent to the last RE encountered. Regular expressions are used in addresses to specify lines, and in some commands (for example, the s substitute command) to specify portions of a line to be substituted. Addresses in ed Addressing in ed relates to the current line. Generally, the current line is the last line affected by a command. The current line number is the address of the current line. If the edit buffer is not empty, the initial value for the current line shall be the last line in the edit buffer; otherwise, zero. Addresses shall be constructed as follows: 1. The <period> character ('.') shall address the current line. 2. The <dollar-sign> character ('$') shall address the last line of the edit buffer. 3. The positive decimal number n shall address the nth line of the edit buffer. 4. The <apostrophe>-x character pair ("'x") shall address the line marked with the mark name character x, which shall be a lowercase letter from the portable character set. It shall be an error if the character has not been set to mark a line or if the line that was marked is not currently present in the edit buffer. 5. 
A BRE enclosed by <slash> characters ('/') shall address the first line found by searching forwards from the line following the current line toward the end of the edit buffer and stopping at the first line for which the line excluding the terminating <newline> matches the BRE. The BRE consisting of a null BRE delimited by a pair of <slash> characters shall address the next line for which the line excluding the terminating <newline> matches the last BRE encountered. In addition, the second <slash> can be omitted at the end of a command line. Within the BRE, a <backslash>-<slash> pair ("\/") shall represent a literal <slash> instead of the BRE delimiter. If necessary, the search shall wrap around to the beginning of the buffer and continue up to and including the current line, so that the entire buffer is searched. 6. A BRE enclosed by <question-mark> characters ('?') shall address the first line found by searching backwards from the line preceding the current line toward the beginning of the edit buffer and stopping at the first line for which the line excluding the terminating <newline> matches the BRE. The BRE consisting of a null BRE delimited by a pair of <question- mark> characters ("??") shall address the previous line for which the line excluding the terminating <newline> matches the last BRE encountered. In addition, the second <question- mark> can be omitted at the end of a command line. Within the BRE, a <backslash>-<question-mark> pair ("\?") shall represent a literal <question-mark> instead of the BRE delimiter. If necessary, the search shall wrap around to the end of the buffer and continue up to and including the current line, so that the entire buffer is searched. 7. A <plus-sign> ('+') or <hyphen-minus> character ('-') followed by a decimal number shall address the current line plus or minus the number. A <plus-sign> or <hyphen-minus> character not followed by a decimal number shall address the current line plus or minus 1. Addresses can be followed by zero or more address offsets, optionally <blank>-separated. Address offsets are constructed as follows: * A <plus-sign> or <hyphen-minus> character followed by a decimal number shall add or subtract, respectively, the indicated number of lines to or from the address. A <plus- sign> or <hyphen-minus> character not followed by a decimal number shall add or subtract 1 to or from the address. * A decimal number shall add the indicated number of lines to the address. It shall not be an error for an intermediate address value to be less than zero or greater than the last line in the edit buffer. It shall be an error for the final address value to be less than zero or greater than the last line in the edit buffer. It shall be an error if a search for a BRE fails to find a matching line. Commands accept zero, one, or two addresses. If more than the required number of addresses are provided to a command that requires zero addresses, it shall be an error. Otherwise, if more than the required number of addresses are provided to a command, the addresses specified first shall be evaluated and then discarded until the maximum number of valid addresses remain, for the specified command. Addresses shall be separated from each other by a <comma> (',') or <semicolon> character (';'). In the case of a <semicolon> separator, the current line ('.') shall be set to the first address, and only then will the second address be calculated. This feature can be used to determine the starting line for forwards and backwards searches; see rules 5. and 6. 
Addresses can be omitted on either side of the <comma> or <semicolon> separator, in which case the resulting address pairs shall be as follows:

    Specified    Resulting
    ,            1 , $
    , addr       1 , addr
    addr ,       addr , addr
    ;            . ; $
    ; addr       . ; addr
    addr ;       addr ; addr

Any <blank> characters included between addresses, address separators, or address offsets shall be ignored. Commands in ed In the following list of ed commands, the default addresses are shown in parentheses. The number of addresses shown in the default shall be the number expected by the command. The parentheses are not part of the address; they show that the given addresses are the default. It is generally invalid for more than one command to appear on a line. However, any command (except e, E, f, q, Q, r, w, and !) can be suffixed by the letter l, n, or p; in which case, except for the l, n, and p commands, the command shall be executed and then the new current line shall be written as described below under the l, n, and p commands. When an l, n, or p suffix is used with an l, n, or p command, the command shall write to standard output as described below, but it is unspecified whether the suffix writes the current line again in the requested format or whether the suffix has no effect. For example, the pl command (base p command with an l suffix) shall either write just the current line or write it twice: once as specified for p and once as specified for l. Also, the g, G, v, and V commands shall take a command as a parameter. Each address component can be preceded by zero or more <blank> characters. The command letter can be preceded by zero or more <blank> characters. If a suffix letter (l, n, or p) is given, the application shall ensure that it immediately follows the command. The e, E, f, r, and w commands shall take an optional file parameter, separated from the command letter by one or more <blank> characters. If changes have been made in the buffer since the last w command that wrote the entire buffer, ed shall warn the user if an attempt is made to destroy the editor buffer via the e or q commands. The ed utility shall write the string: "?\n" (followed by an explanatory message if help mode has been enabled via the H command) to standard output and shall continue in command mode with the current line number unchanged. If the e or q command is repeated with no intervening command, it shall take effect. If a terminal disconnect (see the Base Definitions volume of POSIX.12017, Chapter 11, General Terminal Interface, Modem Disconnect and Closing a Device Terminal), is detected: * If accompanied by a SIGHUP signal, the ed utility shall operate as described in the ASYNCHRONOUS EVENTS section for a SIGHUP signal. * If not accompanied by a SIGHUP signal, the ed utility shall act as if an end-of-file had been detected on standard input. If an end-of-file is detected on standard input: * If the ed utility is in input mode, ed shall terminate input mode and return to command mode. It is unspecified if any partially entered lines (that is, input text without a terminating <newline>) are discarded from the input text. * If the ed utility is in command mode, it shall act as if a q command had been entered. If the closing delimiter of an RE or of a replacement string (for example, '/') in a g, G, s, v, or V command would be the last character before a <newline>, that delimiter can be omitted, in which case the addressed line shall be written. For example, the following pairs of commands are equivalent:

    s/s1/s2      s/s1/s2/p
    g/s1         g/s1/p
    ?s1          ?s1?
If an invalid command is entered, ed shall write the string: "?\n" (followed by an explanatory message if help mode has been enabled via the H command) to standard output and shall continue in command mode with the current line number unchanged. Append Command Synopsis: (.)a <text> . The a command shall read the given text and append it after the addressed line; the current line number shall become the address of the last inserted line or, if there were none, the addressed line. Address 0 shall be valid for this command; it shall cause the appended text to be placed at the beginning of the buffer. Change Command Synopsis: (.,.)c <text> . The c command shall delete the addressed lines, then accept input text that replaces these lines; the current line shall be set to the address of the last line input; or, if there were none, at the line after the last line deleted; if the lines deleted were originally at the end of the buffer, the current line number shall be set to the address of the new last line; if no lines remain in the buffer, the current line number shall be set to zero. Address 0 shall be valid for this command; it shall be interpreted as if address 1 were specified. Delete Command Synopsis: (.,.)d The d command shall delete the addressed lines from the buffer. The address of the line after the last line deleted shall become the current line number; if the lines deleted were originally at the end of the buffer, the current line number shall be set to the address of the new last line; if no lines remain in the buffer, the current line number shall be set to zero. Edit Command Synopsis: e [file] The e command shall delete the entire contents of the buffer and then read in the file named by the pathname file. The current line number shall be set to the address of the last line of the buffer. If no pathname is given, the currently remembered pathname, if any, shall be used (see the f command). The number of bytes read shall be written to standard output, unless the -s option was specified, in the following format: "%d\n", <number of bytes read> The name file shall be remembered for possible use as a default pathname in subsequent e, E, r, and w commands. If file is replaced by '!', the rest of the line shall be taken to be a shell command line whose output is to be read. Such a shell command line shall not be remembered as the current file. All marks shall be discarded upon the completion of a successful e command. If the buffer has changed since the last time the entire buffer was written, the user shall be warned, as described previously. Edit Without Checking Command Synopsis: E [file] The E command shall possess all properties and restrictions of the e command except that the editor shall not check to see whether any changes have been made to the buffer since the last w command. Filename Command Synopsis: f [file] If file is given, the f command shall change the currently remembered pathname to file; whether the name is changed or not, it shall then write the (possibly new) currently remembered pathname to the standard output in the following format: "%s\n", <pathname> The current line number shall be unchanged. Global Command Synopsis: (1,$)g/RE/command list In the g command, the first step shall be to mark every line for which the line excluding the terminating <newline> matches the given RE. 
Then, going sequentially from the beginning of the file to the end of the file, the given command list shall be executed for each marked line, with the current line number set to the address of that line. Any line modified by the command list shall be unmarked. When the g command completes, the current line number shall have the value assigned by the last command in the command list. If there were no matching lines, the current line number shall not be changed. A single command or the first of a list of commands shall appear on the same line as the global command. All lines of a multi-line list except the last line shall be ended with a <backslash> preceding the terminating <newline>; the a, i, and c commands and associated input are permitted. The '.' terminating input mode can be omitted if it would be the last line of the command list. An empty command list shall be equivalent to the p command. The use of the g, G, v, V, and ! commands in the command list produces undefined results. Any character other than <space> or <newline> can be used instead of a <slash> to delimit the RE. Within the RE, the RE delimiter itself can be used as a literal character if it is preceded by a <backslash>. Interactive Global Command Synopsis: (1,$)G/RE/ In the G command, the first step shall be to mark every line for which the line excluding the terminating <newline> matches the given RE. Then, for every such line, that line shall be written, the current line number shall be set to the address of that line, and any one command (other than one of the a, c, i, g, G, v, and V commands) shall be read and executed. A <newline> shall act as a null command (causing no action to be taken on the current line); an '&' shall cause the re-execution of the most recent non-null command executed within the current invocation of G. Note that the commands input as part of the execution of the G command can address and affect any lines in the buffer. Any line modified by the command shall be unmarked. The final value of the current line number shall be the value set by the last command successfully executed. (Note that the last command successfully executed shall be the G command itself if a command fails or the null command is specified.) If there were no matching lines, the current line number shall not be changed. The G command can be terminated by a SIGINT signal. Any character other than <space> or <newline> can be used instead of a <slash> to delimit the RE and the replacement. Within the RE, the RE delimiter itself can be used as a literal character if it is preceded by a <backslash>. Help Command Synopsis: h The h command shall write a short message to standard output that explains the reason for the most recent '?' notification. The current line number shall be unchanged. Help-Mode Command Synopsis: H The H command shall cause ed to enter a mode in which help messages (see the h command) shall be written to standard output for all subsequent '?' notifications. The H command alternately shall turn this mode on and off; it is initially off. If the help-mode is being turned on, the H command also explains the previous '?' notification, if there was one. The current line number shall be unchanged. Insert Command Synopsis: (.)i <text> . The i command shall insert the given text before the addressed line; the current line is set to the last inserted line or, if there was none, to the addressed line. This command differs from the a command only in the placement of the input text. 
Address 0 shall be valid for this command; it shall be interpreted as if address 1 were specified. Join Command Synopsis: (.,.+1)j The j command shall join contiguous lines by removing the appropriate <newline> characters. If exactly one address is given, this command shall do nothing. If lines are joined, the current line number shall be set to the address of the joined line; otherwise, the current line number shall be unchanged. Mark Command Synopsis: (.)kx The k command shall mark the addressed line with name x, which the application shall ensure is a lowercase letter from the portable character set. The address "'x" shall then refer to this line; the current line number shall be unchanged. List Command Synopsis: (.,.)l The l command shall write to standard output the addressed lines in a visually unambiguous form. The characters listed in the Base Definitions volume of POSIX.12017, Table 5-1, Escape Sequences and Associated Actions ('\\', '\a', '\b', '\f', '\r', '\t', '\v') shall be written as the corresponding escape sequence; the '\n' in that table is not applicable. Non-printable characters not in the table shall be written as one three-digit octal number (with a preceding <backslash> character) for each byte in the character (most significant byte first). Long lines shall be folded, with the point of folding indicated by <newline> preceded by a <backslash>; the length at which folding occurs is unspecified, but should be appropriate for the output device. The end of each line shall be marked with a '$', and '$' characters within the text shall be written with a preceding <backslash>. An l command can be appended to any other command other than e, E, f, q, Q, r, w, or !. The current line number shall be set to the address of the last line written. Move Command Synopsis: (.,.)maddress The m command shall reposition the addressed lines after the line addressed by address. Address 0 shall be valid for address and cause the addressed lines to be moved to the beginning of the buffer. It shall be an error if address address falls within the range of moved lines. The current line number shall be set to the address of the last line moved. Number Command Synopsis: (.,.)n The n command shall write to standard output the addressed lines, preceding each line by its line number and a <tab>; the current line number shall be set to the address of the last line written. The n command can be appended to any command other than e, E, f, q, Q, r, w, or !. Print Command Synopsis: (.,.)p The p command shall write to standard output the addressed lines; the current line number shall be set to the address of the last line written. The p command can be appended to any command other than e, E, f, q, Q, r, w, or !. Prompt Command Synopsis: P The P command shall cause ed to prompt with an <asterisk> ('*') (or string, if -p is specified) for all subsequent commands. The P command alternatively shall turn this mode on and off; it shall be initially on if the -p option is specified; otherwise, off. The current line number shall be unchanged. Quit Command Synopsis: q The q command shall cause ed to exit. If the buffer has changed since the last time the entire buffer was written, the user shall be warned, as described previously. Quit Without Checking Command Synopsis: Q The Q command shall cause ed to exit without checking whether changes have been made in the buffer since the last w command. 
Read Command Synopsis: ($)r [file] The r command shall read in the file named by the pathname file and append it after the addressed line. If no file argument is given, the currently remembered pathname, if any, shall be used (see the e and f commands). The currently remembered pathname shall not be changed unless there is no remembered pathname. Address 0 shall be valid for r and shall cause the file to be read at the beginning of the buffer. If the read is successful, and -s was not specified, the number of bytes read shall be written to standard output in the following format: "%d\n", <number of bytes read> The current line number shall be set to the address of the last line read in. If file is replaced by '!', the rest of the line shall be taken to be a shell command line whose output is to be read. Such a shell command line shall not be remembered as the current pathname. Substitute Command Synopsis: (.,.)s/RE/replacement/flags The s command shall search each addressed line for an occurrence of the specified RE and replace either the first or all (non- overlapped) matched strings with the replacement; see the following description of the g suffix. It is an error if the substitution fails on every addressed line. Any character other than <space> or <newline> can be used instead of a <slash> to delimit the RE and the replacement. Within the RE, the RE delimiter itself can be used as a literal character if it is preceded by a <backslash>. The current line shall be set to the address of the last line on which a substitution occurred. An <ampersand> ('&') appearing in the replacement shall be replaced by the string matching the RE on the current line. The special meaning of '&' in this context can be suppressed by preceding it by <backslash>. As a more general feature, the characters '\n', where n is a digit, shall be replaced by the text matched by the corresponding back-reference expression. If the corresponding back-reference expression does not match, then the characters '\n' shall be replaced by the empty string. When the character '%' is the only character in the replacement, the replacement used in the most recent substitute command shall be used as the replacement in the current substitute command; if there was no previous substitute command, the use of '%' in this manner shall be an error. The '%' shall lose its special meaning when it is in a replacement string of more than one character or is preceded by a <backslash>. For each <backslash> encountered in scanning replacement from beginning to end, the following character shall lose its special meaning (if any). It is unspecified what special meaning is given to any character other than <backslash>, '&', '%', or digits. A line can be split by substituting a <newline> into it. The application shall ensure it escapes the <newline> in the replacement by preceding it by <backslash>. Such substitution cannot be done as part of a g or v command list. The current line number shall be set to the address of the last line on which a substitution is performed. If no substitution is performed, the current line number shall be unchanged. If a line is split, a substitution shall be considered to have been performed on each of the new lines for the purpose of determining the new current line number. A substitution shall be considered to have been performed even if the replacement string is identical to the string that it replaces. 
The application shall ensure that the value of flags is zero or more of: count Substitute for the countth occurrence only of the RE found on each addressed line. g Globally substitute for all non-overlapping instances of the RE rather than just the first one. If both g and count are specified, the results are unspecified. l Write to standard output the final line in which a substitution was made. The line shall be written in the format specified for the l command. n Write to standard output the final line in which a substitution was made. The line shall be written in the format specified for the n command. p Write to standard output the final line in which a substitution was made. The line shall be written in the format specified for the p command. Copy Command Synopsis: (.,.)taddress The t command shall be equivalent to the m command, except that a copy of the addressed lines shall be placed after address address (which can be 0); the current line number shall be set to the address of the last line added. Undo Command Synopsis: u The u command shall nullify the effect of the most recent command that modified anything in the buffer, namely the most recent a, c, d, g, i, j, m, r, s, t, u, v, G, or V command. All changes made to the buffer by a g, G, v, or V global command shall be undone as a single change; if no changes were made by the global command (such as with g/RE/p), the u command shall have no effect. The current line number shall be set to the value it had immediately before the command being undone started. Global Non-Matched Command Synopsis: (1,$)v/RE/command list This command shall be equivalent to the global command g except that the lines that are marked during the first step shall be those for which the line excluding the terminating <newline> does not match the RE. Interactive Global Not-Matched Command Synopsis: (1,$)V/RE/ This command shall be equivalent to the interactive global command G except that the lines that are marked during the first step shall be those for which the line excluding the terminating <newline> does not match the RE. Write Command Synopsis: (1,$)w [file] The w command shall write the addressed lines into the file named by the pathname file. The command shall create the file, if it does not exist, or shall replace the contents of the existing file. The currently remembered pathname shall not be changed unless there is no remembered pathname. If no pathname is given, the currently remembered pathname, if any, shall be used (see the e and f commands); the current line number shall be unchanged. If the command is successful, the number of bytes written shall be written to standard output, unless the -s option was specified, in the following format: "%d\n", <number of bytes written> If file begins with '!', the rest of the line shall be taken to be a shell command line whose standard input shall be the addressed lines. Such a shell command line shall not be remembered as the current pathname. This usage of the write command with '!' shall not be considered as a ``last w command that wrote the entire buffer'', as described previously; thus, this alone shall not prevent the warning to the user if an attempt is made to destroy the editor buffer via the e or q commands. Line Number Command Synopsis: ($)= The line number of the addressed line shall be written to standard output in the following format: "%d\n", <line number> The current line number shall be unchanged by this command. 
Shell Escape Command Synopsis: !command The remainder of the line after the '!' shall be sent to the command interpreter to be interpreted as a shell command line. Within the text of that shell command line, the unescaped character '%' shall be replaced with the remembered pathname; if a '!' appears as the first character of the command, it shall be replaced with the text of the previous shell command executed via '!'. Thus, "!!" shall repeat the previous !command. If any replacements of '%' or '!' are performed, the modified line shall be written to the standard output before command is executed. The ! command shall write: "!\n" to standard output upon completion, unless the -s option is specified. The current line number shall be unchanged. Null Command Synopsis: (.+1) An address alone on a line shall cause the addressed line to be written. A <newline> alone shall be equivalent to "+1p". The current line number shall be set to the address of the written line. EXIT STATUS top The following exit values shall be returned: 0 Successful completion without any file or command errors. >0 An error occurred. CONSEQUENCES OF ERRORS top When an error in the input script is encountered, or when an error is detected that is a consequence of the data (not) present in the file or due to an external condition such as a read or write error: * If the standard input is a terminal device file, all input shall be flushed, and a new command read. * If the standard input is a regular file, ed shall terminate with a non-zero exit status. The following sections are informative. APPLICATION USAGE top Because of the extremely terse nature of the default error messages, the prudent script writer begins the ed input commands with an H command, so that if any errors do occur at least some clue as to the cause is made available. In earlier versions of this standard, an obsolescent - option was described. This is no longer specified. Applications should use the -s option. Using - as a file operand now produces unspecified results. This allows implementations to continue to support the former required behavior. EXAMPLES top None. RATIONALE top The initial description of this utility was adapted from the SVID. It contains some features not found in Version 7 or BSD- derived systems. Some of the differences between the POSIX and BSD ed utilities include, but need not be limited to: * The BSD - option does not suppress the '!' prompt after a ! command. * BSD does not support the special meanings of the '%' and '!' characters within a ! command. * BSD does not support the addresses ';' and ','. * BSD allows the command/suffix pairs pp, ll, and so on, which are unspecified in this volume of POSIX.12017. * BSD does not support the '!' character part of the e, r, or w commands. * A failed g command in BSD sets the line number to the last line searched if there are no matches. * BSD does not default the command list to the p command. * BSD does not support the G, h, H, n, or V commands. * On BSD, if there is no inserted text, the insert command changes the current line to the referenced line -1; that is, the line before the specified line. * On BSD, the join command with only a single address changes the current line to that address. * BSD does not support the P command; moreover, in BSD it is synonymous with the p command. * BSD does not support the undo of the commands j, m, r, s, or t. * The Version 7 ed command W, and the BSD ed commands W, wq, and z are not present in this volume of POSIX.12017. 
The -s option was added to allow the functionality of the removed - option in a manner compatible with the Utility Syntax Guidelines. In early proposals there was a limit, {ED_FILE_MAX}, that described the historical limitations of some ed utilities in their handling of large files; some of these have had problems with files larger than 100000 bytes. It was this limitation that prompted much of the desire to include a split command in this volume of POSIX.12017. Since this limit was removed, this volume of POSIX.12017 requires that implementations document the file size limits imposed by ed in the conformance document. The limit {ED_LINE_MAX} was also removed; therefore, the global limit {LINE_MAX} is used for input and output lines. The manner in which the l command writes non-printable characters was changed to avoid the historical backspace-overstrike method. On video display terminals, the overstrike is ambiguous because most terminals simply replace overstruck characters, making the l format not useful for its intended purpose of unambiguously understanding the content of the line. The historical <backslash>-escapes were also ambiguous. (The string "a\0011" could represent a line containing those six characters or a line containing the three characters 'a', a byte with a binary value of 1, and a 1.) In the format required here, a <backslash> appearing in the line is written as "\\" so that the output is truly unambiguous. The method of marking the ends of lines was adopted from the ex editor and is required for any line ending in <space> characters; the '$' is placed on all lines so that a real '$' at the end of a line cannot be misinterpreted. Earlier versions of this standard allowed for implementations with bytes other than eight bits, but this has been modified in this version. The description of how a NUL is written was removed. The NUL character cannot be in text files, and this volume of POSIX.12017 should not dictate behavior in the case of undefined, erroneous input. Unlike some of the other editing utilities, the filenames accepted by the E, e, R, and r commands are not patterns. Early proposals stated that the -p option worked only when standard input was associated with a terminal device. This has been changed to conform to historical implementations, thereby allowing applications to interpose themselves between a user and the ed utility. The form of the substitute command that uses the n suffix was limited in some historical documentation (where this was described incorrectly as ``backreferencing''). This limit has been omitted because there is no reason why an editor processing lines of {LINE_MAX} length should have this restriction. The command s/x/X/2047 should be able to substitute the 2047th occurrence of 'x' on a line. The use of printing commands with printing suffixes (such as pn, lp, and so on) was made unspecified because BSD-based systems allow this, whereas System V does not. Some BSD-based systems exit immediately upon receipt of end-of- file if all of the lines in the file have been deleted. Since this volume of POSIX.12017 refers to the q command in this instance, such behavior is not allowed. Some historical implementations returned exit status zero even if command errors had occurred; this is not allowed by this volume of POSIX.12017. Some historical implementations contained a bug that allowed a single <period> to be entered in input mode as <backslash> <period> <newline>. 
This is not allowed by ed because there is no description of escaping any of the characters in input mode; <backslash> characters are entered into the buffer exactly as typed. The typical method of entering a single <period> has been to precede it with another character and then use the substitute command to delete that character. It is difficult under some modes of some versions of historical operating system terminal drivers to distinguish between an end-of-file condition and terminal disconnect. POSIX.12008 does not require implementations to distinguish between the two situations, which permits historical implementations of the ed utility on historical platforms to conform. Implementations are encouraged to distinguish between the two, if possible, and take appropriate action on terminal disconnect. Historically, ed accepted a zero address for the a and r commands in order to insert text at the start of the edit buffer. When the buffer was empty the command .= returned zero. POSIX.12008 requires conformance to historical practice. For consistency with the a and r commands and better user functionality, the i and c commands must also accept an address of 0, in which case 0i is treated as 1i and likewise for the c command. All of the following are valid addresses:

    +++          Three lines after the current line.
    /pattern/-   One line before the next occurrence of pattern.
    -2           Two lines before the current line.
    3 ---- 2     Line one (note the intermediate negative address).
    1 2 3        Line six.

Any number of addresses can be provided to commands taking addresses; for example, "1,2,3,4,5p" prints lines 4 and 5, because two is the greatest valid number of addresses accepted by the print command. This, in combination with the <semicolon> delimiter, permits users to create commands based on ordered patterns in the file. For example, the command "3;/foo/;+2p" will display the first line after line 3 that contains the pattern foo, plus the next two lines. Note that the address "3;" must still be evaluated before being discarded, because the search origin for the "/foo/" command depends on this. Historically, ed disallowed address chains, as discussed above, consisting solely of <comma> or <semicolon> separators; for example, ",,," or ";;;" were considered an error. For consistency of address specification, this restriction is removed. The following table lists some of the address forms now possible:

    Address   Addr1   Addr2   Status       Comment
    7,        7       7       Historical
    7,5,      5       5       Historical
    7,5,9     5       9       Historical
    7,9       7       9       Historical
    7,+       7       8       Historical
    ,         1       $       Historical
    ,7        1       7       Extension
    ,,        $       $       Extension
    ,;        $       $       Extension
    7;        7       7       Historical
    7;5;      5       5       Historical
    7;5;9     5       9       Historical
    7;5,9     5       9       Historical
    7;$;4     $       4       Historical   Valid, but erroneous.
    7;9       7       9       Historical
    7;+       7       8       Historical
    ;         .       $       Historical
    ;7        .       7       Extension
    ;;        $       $       Extension
    ;,        $       $       Extension

Historically, ed accepted the '^' character as an address, in which case it was identical to the <hyphen-minus> character. POSIX.12008 does not require or prohibit this behavior. FUTURE DIRECTIONS top None. 
SEE ALSO top Section 1.4, Utility Description Defaults, ex(1p), sed(1p), sh(1p), vi(1p) The Base Definitions volume of POSIX.12017, Table 5-1, Escape Sequences and Associated Actions, Chapter 8, Environment Variables, Section 9.3, Basic Regular Expressions, Chapter 11, General Terminal Interface, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 ED(1P) Pages that refer to this page: diff(1p), ex(1p), lex(1p), mailx(1p), more(1p), patch(1p), pax(1p), sed(1p), vi(1p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# ed\n\n> The original Unix text editor.\n> See also: `awk`, `sed`.\n> More information: <https://www.gnu.org/software/ed/manual/ed_manual.html>.\n\n- Start an interactive editor session with an empty document:\n\n`ed`\n\n- Start an interactive editor session with an empty document and a specific prompt:\n\n`ed --prompt='> '`\n\n- Start an interactive editor session with user-friendly errors:\n\n`ed --verbose`\n\n- Start an interactive editor session with an empty document and without diagnostics, byte counts and '!' prompt:\n\n`ed --quiet`\n\n- Start an interactive editor session without exit status change when command fails:\n\n`ed --loose-exit-status`\n\n- Edit a specific file (this shows the byte count of the loaded file):\n\n`ed {{path/to/file}}`\n\n- Replace a string with a specific replacement for all lines:\n\n`,s/{{regular_expression}}/{{replacement}}/g`\n
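Because ed reads commands from standard input, it also works well in scripts. A minimal non-interactive sketch (demo.txt is a placeholder file created only for the example) exercising addressing, s, a, and w as described in the manual page above:

```sh
# Create a throwaway sample file.
printf 'alpha\nbeta\ngamma\n' > demo.txt

# -s suppresses byte counts; the here-document supplies ed commands:
#   2s/beta/BETA/  substitute on line 2
#   $a ... .       append a line after the last line; '.' ends input mode
#   ,p             print the whole buffer (same as 1,$p)
#   w              write the buffer back to demo.txt, then q quits
ed -s demo.txt <<'EOF'
2s/beta/BETA/
$a
delta
.
,p
w
q
EOF
```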
edquota
edquota(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training edquota(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | SEE ALSO | COLOPHON EDQUOTA(8) System Manager's Manual EDQUOTA(8) NAME top edquota - edit user quotas SYNOPSIS top edquota [ -p protoname ] [ -u | -g | -P ] [ -rm ] [ -F format- name ] [ -f filesystem ] username | groupname | projectname... edquota [ -u | -g | -P ] [ -F format-name ] [ -f filesystem ] -t edquota [ -u | -g | -P ] [ -F format-name ] [ -f filesystem ] -T username | groupname | projectname... DESCRIPTION top edquota is a quota editor. One or more users, groups, or projects may be specified on the command line. If a number is given in the place of user/group/project name it is treated as an UID/GID/Project ID. For each user, group, or project a temporary file is created with an ASCII representation of the current disk quotas for that user, group, or project and an editor is then invoked on the file. The quotas may then be modified, new quotas added, etc. Setting a quota to zero indicates that no quota should be imposed. Block usage and limits are reported and interpreted as multiples of kibibyte (1024 bytes) blocks by default. Symbols K, M, G, and T can be appended to numeric value to express kibibytes, mebibytes, gibibytes, and tebibytes. Inode usage and limits are interpreted literally. Symbols k, m, g, and t can be appended to numeric value to express multiples of 10^3, 10^6, 10^9, and 10^12 inodes. Users are permitted to exceed their soft limits for a grace period that may be specified per filesystem. Once the grace period has expired, the soft limit is enforced as a hard limit. The current usage information in the file is for informational purposes; only the hard and soft limits can be changed. Upon leaving the editor, edquota reads the temporary file and modifies the binary quota files to reflect the changes made. The editor invoked is vi(1) unless either the EDITOR or the VISUAL environment variable specifies otherwise. Only the super-user may edit quotas. OPTIONS top -r, --remote Edit also non-local quota use rpc.rquotad on remote server to set quota. This option is available only if quota tools were compiled with enabled support for setting quotas over RPC. The -n option is equivalent, and is maintained for backward compatibility. -m, --no-mixed-pathnames Currently, pathnames of NFSv4 mountpoints are sent without leading slash in the path. rpc.rquotad uses this to recognize NFSv4 mounts and properly prepend pseudoroot of NFS filesystem to the path. If you specify this option, edquota will always send paths with a leading slash. This can be useful for legacy reasons but be aware that quota over RPC will stop working if you are using new rpc.rquotad. -u, --user Edit the user quota. This is the default. -g, --group Edit the group quota. -P, --project Edit the project quota. -p, --prototype=protoname Duplicate the quotas of the prototypical user specified for each user specified. This is the normal mechanism used to initialize quotas for groups of users. --always-resolve Always try to translate user / group name to uid / gid even if the name is composed of digits only. -F, --format=format-name Edit quota for specified format (ie. don't perform format autodetection). 
Possible format names are: vfsold Original quota format with 16-bit UIDs / GIDs, vfsv0 Quota format with 32-bit UIDs / GIDs, 64-bit space usage, 32-bit inode usage and limits, vfsv1 Quota format with 64-bit quota limits and usage, rpc (quota over NFS), xfs (quota on XFS filesystem) -f, --filesystem filesystem Perform specified operations only for given filesystem (default is to perform operations for all filesystems with quota). -t, --edit-period Edit the soft time limits for each filesystem. In old quota format if the time limits are zero, the default time limits in <linux/quota.h> are used. In new quota format time limits must be specified (there is no default value set in kernel). Time units of 'seconds', 'minutes', 'hours', and 'days' are understood. Time limits are printed in the greatest possible time unit such that the value is greater than or equal to one. -T, --edit-times Edit time for the user/group/project when softlimit is enforced. Possible values are 'unset' or number and unit. Units are the same as in -t option. FILES top aquota.user or aquota.group quota file at the filesystem root (version 2 quota, non- XFS filesystems) quota.user or quota.group quota file at the filesystem root (version 1 quota, non- XFS filesystems) /etc/mtab mounted filesystems table SEE ALSO top quota(1), vi(1), quotactl(2), quotacheck(8), quotaon(8), repquota(8), setquota(8) COLOPHON top This page is part of the quota (Linux Diskquota Tools) project. Information about the project can be found at [unknown -- if you know, please contact man-pages@man7.org] It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/quota/quota-tools.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2022-12-06.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org EDQUOTA(8) Pages that refer to this page: quota(1), convertquota(8), pam_setquota(8), quotacheck(8), repquota(8), rpc.rquotad(8), setquota(8), warnquota(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# edquota\n\n> Edit quotas for a user or group. By default it operates on all filesystems with quotas.\n> Quota information is stored permanently in the `quota.user` and `quota.group` files in the root of the filesystem.\n> More information: <https://manned.org/edquota>.\n\n- Edit quota of the current user:\n\n`sudo edquota --user $(whoami)`\n\n- Edit quota of a specific user:\n\n`sudo edquota --user {{username}}`\n\n- Edit quota for a group:\n\n`sudo edquota --group {{group}}`\n\n- Restrict operations to a given filesystem (by default edquota operates on all filesystems with quotas):\n\n`sudo edquota --filesystem {{filesystem}}`\n\n- Edit the default grace period:\n\n`sudo edquota -t`\n\n- Duplicate a quota to other users:\n\n`sudo edquota -p {{reference_user}} {{destination_user1}} {{destination_user2}}`\n
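A few of the flows described above, sketched with placeholder account names (alice, bob) and a placeholder mount point (/home); all options shown are the ones documented in the manual page:

```sh
# Edit alice's user quota; the temporary file opens in vi unless EDITOR/VISUAL says otherwise.
sudo edquota -u alice

# Copy alice's limits to bob using the prototype mechanism.
sudo edquota -p alice bob

# Edit the per-filesystem grace periods for user quotas.
sudo edquota -t

# Restrict the edit to a single filesystem.
sudo edquota -f /home -u alice
```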
eject
eject(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training eject(1) Linux manual page NAME | DESCRIPTION | OPTIONS | EXIT STATUS | NOTES | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY EJECT(1) User Commands EJECT(1) NAME top eject - eject removable media eject [options] device|mountpoint DESCRIPTION top eject allows removable media (typically a CD-ROM, floppy disk, tape, JAZ, ZIP or USB disk) to be ejected under software control. The command can also control some multi-disc CD-ROM changers, the auto-eject feature supported by some devices, and close the disc tray of some CD-ROM drives. The device corresponding to device or mountpoint is ejected. If no name is specified, the default name /dev/cdrom is used. The device may be addressed by device name (e.g., 'sda'), device path (e.g., '/dev/sda'), UUID=uuid or LABEL=label tags. There are four different methods of ejecting, depending on whether the device is a CD-ROM, SCSI device, removable floppy, or tape. By default eject tries all four methods in order until it succeeds. If a device partition is specified, the whole-disk device is used. If the device or a device partition is currently mounted, it is unmounted before ejecting. The eject is processed on exclusive open block device file descriptor if --no-unmount or --force are not specified. OPTIONS top -a, --auto on|off This option controls the auto-eject mode, supported by some devices. When enabled, the drive automatically ejects when the device is closed. -c, --changerslot slot With this option a CD slot can be selected from an ATAPI/IDE CD-ROM changer. The CD-ROM drive cannot be in use (mounted data CD or playing a music CD) for a change request to work. Please also note that the first slot of the changer is referred to as 0, not 1. -d, --default List the default device name. -F, --force Force eject, don't check device type, don't open device with exclusive lock. The successful result may be false positive on non hot-pluggable devices. -f, --floppy This option specifies that the drive should be ejected using a removable floppy disk eject command. -i, --manualeject on|off This option controls locking of the hardware eject button. When enabled, the drive will not be ejected when the button is pressed. This is useful when you are carrying a laptop in a bag or case and don't want it to eject if the button is inadvertently pressed. -M, --no-partitions-unmount The option tells eject to not try to unmount other partitions on partitioned devices. If another partition is still mounted, the program will not attempt to eject the media. It will attempt to unmount only the device or mountpoint given on the command line. -m, --no-unmount The option tells eject to not try to unmount at all. If this option is not specified then eject opens the device with O_EXCL flag to be sure that the device is not used (since v2.35). -n, --noop With this option the selected device is displayed but no action is performed. -p, --proc This option allows you to use /proc/mounts instead /etc/mtab. It also passes the -n option to umount(8). -q, --tape This option specifies that the drive should be ejected using a tape drive offline command. -r, --cdrom This option specifies that the drive should be ejected using a CDROM eject command. -s, --scsi This option specifies that the drive should be ejected using SCSI commands. -T, --traytoggle With this option the drive is given a CD-ROM tray close command if it's opened, and a CD-ROM tray eject command if it's closed. 
Not all devices support this command, because it uses the above CD-ROM tray close command. -t, --trayclose With this option the drive is given a CD-ROM tray close command. Not all devices support this command. -h, --help Display help text and exit. -V, --version Print version and exit. -v, --verbose Run in verbose mode; more information is displayed about what the command is doing. -X, --listspeed With this option the CD-ROM drive will be probed to detect the available speeds. The output is a list of speeds which can be used as an argument of the -x option. This only works with Linux 2.6.13 or higher, on previous versions solely the maximum speed will be reported. Also note that some drives may not correctly report the speed and therefore this option does not work with them. -x, --cdspeed speed With this option the drive is given a CD-ROM select speed command. The speed argument is a number indicating the desired speed (e.g., 8 for 8X speed), or 0 for maximum data rate. Not all devices support this command and you can only specify speeds that the drive is capable of. Every time the media is changed this option is cleared. This option can be used alone, or with the -t and -c options. EXIT STATUS top Returns 0 if operation was successful, 1 if operation failed or command syntax was not valid. NOTES top eject only works with devices that support one or more of the four methods of ejecting. This includes most CD-ROM drives (IDE, SCSI, and proprietary), some SCSI tape drives, JAZ drives, ZIP drives (parallel port, SCSI, and IDE versions), and LS120 removable floppies. Users have also reported success with floppy drives on Sun SPARC and Apple Macintosh systems. If eject does not work, it is most likely a limitation of the kernel driver for the device and not the eject program itself. The -r, -s, -f, and -q options allow controlling which methods are used to eject. More than one method can be specified. If none of these options are specified, it tries all four (this works fine in most cases). eject may not always be able to determine if the device is mounted (e.g., if it has several names). If the device name is a symbolic link, eject will follow the link and use the device that it points to. If eject determines that the device can have multiple partitions, it will attempt to unmount all mounted partitions of the device before ejecting (see also --no-partitions-unmount). If an unmount fails, the program will not attempt to eject the media. You can eject an audio CD. Some CD-ROM drives will refuse to open the tray if the drive is empty. Some devices do not support the tray close command. If the auto-eject feature is enabled, then the drive will always be ejected after running this command. Not all Linux kernel CD-ROM drivers support the auto-eject mode. There is no way to find out the state of the auto-eject mode. You need appropriate privileges to access the device files. Running as root is required to eject some devices (e.g., SCSI devices). AUTHORS top Jeff Tranter <tranter@pobox.com> - original author, Karel Zak <kzak@redhat.com> and Michal Luscon <mluscon@redhat.com> - util-linux version. SEE ALSO top findmnt(8), lsblk(8), mount(8), umount(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The eject command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. 
# eject\n\n> Eject CDs, floppy disks, and tape drives.\n> More information: <https://manned.org/eject>.\n\n- Display the default device:\n\n`eject -d`\n\n- Eject the default device:\n\n`eject`\n\n- Eject a specific device (the default method order is CD-ROM, SCSI, floppy, and tape):\n\n`eject {{/dev/cdrom}}`\n\n- Toggle whether a device's tray is open or closed:\n\n`eject -T {{/dev/cdrom}}`\n\n- Eject a CD-ROM drive:\n\n`eject -r {{/dev/cdrom}}`\n\n- Eject a floppy drive:\n\n`eject -f {{/mnt/floppy}}`\n\n- Eject a tape drive:\n\n`eject -q {{/mnt/tape}}`\n
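As the manual page above notes, the device may also be addressed by a `LABEL=` or `UUID=` tag, and `-i` locks the hardware eject button. A small illustrative sketch (the label `Backup` is a made-up example):

```sh
# Eject a disc addressed by filesystem label instead of a device node
# ("Backup" is a hypothetical label).
eject LABEL=Backup

# Lock the drive's physical eject button (e.g. before packing a laptop),
# then unlock it again later.
eject -i on /dev/cdrom
eject -i off /dev/cdrom
```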
env
env(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training env(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | NOTES | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON ENV(1) User Commands ENV(1) NAME top env - run a program in a modified environment SYNOPSIS top env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...] DESCRIPTION top Set each NAME to VALUE in the environment and run COMMAND. Mandatory arguments to long options are mandatory for short options too. -i, --ignore-environment start with an empty environment -0, --null end each output line with NUL, not newline -u, --unset=NAME remove variable from the environment -C, --chdir=DIR change working directory to DIR -S, --split-string=S process and split S into separate arguments; used to pass multiple arguments on shebang lines --block-signal[=SIG] block delivery of SIG signal(s) to COMMAND --default-signal[=SIG] reset handling of SIG signal(s) to the default --ignore-signal[=SIG] set handling of SIG signal(s) to do nothing --list-signal-handling list non default signal handling to stderr -v, --debug print verbose information for each processing step --help display this help and exit --version output version information and exit A mere - implies -i. If no COMMAND, print the resulting environment. SIG may be a signal name like 'PIPE', or a signal number like '13'. Without SIG, all known signals are included. Multiple signals can be comma-separated. An empty SIG argument is a no-op. Exit status: 125 if the env command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found - the exit status of COMMAND otherwise OPTIONS top -S/--split-string usage in scripts The -S option allows specifying multiple parameters in a script. Running a script named 1.pl containing the following first line: #!/usr/bin/env -S perl -w -T ... Will execute perl -w -T 1.pl . Without the '-S' parameter the script will likely fail with: /usr/bin/env: 'perl -w -T': No such file or directory See the full documentation for more details. --default-signal[=SIG] usage This option allows setting a signal handler to its default action, which is not possible using the traditional shell trap command. The following example ensures that seq will be terminated by SIGPIPE no matter how this signal is being handled in the process invoking the command. sh -c 'env --default-signal=PIPE seq inf | head -n1' NOTES top POSIX's exec(3p) pages says: "many existing applications wrongly assume that they start with certain signals set to the default action and/or unblocked.... Therefore, it is best not to block or ignore signals across execs without explicit reason to do so, and especially not to block signals across execs of arbitrary (not closely cooperating) programs." AUTHOR top Written by Richard Mlynarik, David MacKenzie, and Assaf Gordon. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
SEE ALSO top sigaction(2), sigprocmask(2), signal(7) Full documentation <https://www.gnu.org/software/coreutils/env> or available locally via: info '(coreutils) env invocation'
# env\n\n> Show the environment or run a program in a modified environment.\n> More information: <https://www.gnu.org/software/coreutils/env>.\n\n- Show the environment:\n\n`env`\n\n- Run a program; often used in scripts after the shebang (`#!`) to look up the path to the program:\n\n`env {{program}}`\n\n- Clear the environment and run a program:\n\n`env -i {{program}}`\n\n- Remove a variable from the environment and run a program:\n\n`env -u {{variable}} {{program}}`\n\n- Set a variable and run a program:\n\n`env {{variable}}={{value}} {{program}}`\n\n- Set one or more variables and run a program:\n\n`env {{variable1}}={{value}} {{variable2}}={{value}} {{variable3}}={{value}} {{program}}`\n
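Two of the less obvious behaviours described in the manual page above are `-S` splitting on shebang lines and `--default-signal`. The sketch below reuses the page's own examples; the script name `check.pl` is made up, and it assumes `perl` is installed:

```sh
# -S lets a shebang pass several words to the interpreter; without it the
# kernel hands env the single string "perl -w -T" as one program name.
cat > check.pl <<'EOF'
#!/usr/bin/env -S perl -w -T
print "running with warnings and taint mode\n";
EOF
chmod +x check.pl && ./check.pl

# --default-signal resets a signal's handling across the exec; here it
# guarantees that seq is terminated by SIGPIPE (example from the page).
sh -c 'env --default-signal=PIPE seq inf | head -n1'
```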
envsubst
envsubst(1) - Linux manual page ENVSUBST(1) User Commands ENVSUBST(1) NAME top envsubst - substitutes environment variables in shell format strings SYNOPSIS top envsubst [OPTION] [SHELL-FORMAT] DESCRIPTION top Substitutes the values of environment variables. Operation mode: -v, --variables output the variables occurring in SHELL-FORMAT Informative output: -h, --help display this help and exit -V, --version output version information and exit In normal operation mode, standard input is copied to standard output, with references to environment variables of the form $VARIABLE or ${VARIABLE} being replaced with the corresponding values. If a SHELL-FORMAT is given, only those environment variables that are referenced in SHELL-FORMAT are substituted; otherwise all environment variable references occurring in standard input are substituted. When --variables is used, standard input is ignored, and the output consists of the environment variables that are referenced in SHELL-FORMAT, one per line. AUTHOR top Written by Bruno Haible. REPORTING BUGS top Report bugs in the bug tracker at <https://savannah.gnu.org/projects/gettext> or by email to <bug-gettext@gnu.org>. COPYRIGHT top Copyright 2003-2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top The full documentation for envsubst is maintained as a Texinfo manual. If the info and envsubst programs are properly installed at your site, the command info envsubst should give you access to the complete manual.
# envsubst\n\n> Substitutes environment variables with their values in shell format strings.\n> Variables to be replaced should be in either `${var}` or `$var` format.\n> More information: <https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html>.\n\n- Replace environment variables in `stdin` and output to `stdout`:\n\n`echo '{{$HOME}}' | envsubst`\n\n- Replace environment variables in an input file and output to `stdout`:\n\n`envsubst < {{path/to/input_file}}`\n\n- Replace environment variables in an input file and output to a file:\n\n`envsubst < {{path/to/input_file}} > {{path/to/output_file}}`\n\n- Replace only the environment variables given in a space-separated list in an input file:\n\n`envsubst '{{$USER $SHELL $HOME}}' < {{path/to/input_file}}`\n
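Building on the SHELL-FORMAT behaviour described in the manual page, here is a small self-contained sketch; the template file name `greeting.tmpl` is invented for the example:

```sh
# Only variables listed in the SHELL-FORMAT argument are replaced;
# $SHELL is left as literal text in the output.
export NAME=world
printf 'Hello, $NAME! Your shell is $SHELL.\n' > greeting.tmpl
envsubst '$NAME' < greeting.tmpl
# -> Hello, world! Your shell is $SHELL.
```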
eqn
eqn(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training eqn(1) Linux manual page Name | Synopsis | Description | Options | Files | MathML mode limitations | Caveats | Bugs | Examples | See also | COLOPHON eqn(1) General Commands Manual eqn(1) Name top eqn - format mathematics (equations) for groff or MathML Synopsis top eqn [-CNrR] [-d xy] [-f global-italic-font] [-m minimum-type- size] [-M eqnrc-directory] [-p super/subscript-size- reduction] [-s global-type-size] [-T device] [file ...] eqn --help eqn -v eqn --version Description top The GNU implementation of eqn is part of the groff(7) document formatting system. eqn is a troff(1) preprocessor that translates expressions in its own language, embedded in roff(7) input files, into mathematical notation typeset by troff(1). It copies each file's contents to the standard output stream, translating each equation between lines starting with .EQ and .EN, or within a pair of user-specified delimiters. Normally, eqn is not executed directly by the user, but invoked by specifying the -e option to groff(1). While GNU eqn's input syntax is highly compatible with AT&T eqn, the output eqn produces cannot be processed by AT&T troff; GNU troff (or a troff implementing relevant GNU extensions) must be used. If no file operands are present, or if file is -, eqn reads the standard input stream. Unless the -R option is used, eqn searches for the file eqnrc in the directories given with the -M option first, then in /usr/ local/share/groff/site-tmac, and finally in the standard macro directory /usr/local/share/groff/1.23.0/tmac. If it exists and is readable, eqn processes it before any input files. This man page primarily discusses the differences between GNU eqn and AT&T eqn. Most of the new features of the GNU eqn input language are based on TeX. There are some references to the differences between TeX and GNU eqn below; these may safely be ignored if you do not know TeX. Four points are worth note. GNU eqn emits Presentation MathML output when invoked with the -T MathML option. GNU eqn does not support terminal devices well, though it may suffice for simple inputs. GNU eqn sets the input token ... as an ellipsis on the text baseline, not the three centered dots of AT&T eqn. Set an ellipsis on the math axis with the GNU extension macro cdots. GNU eqn's delim command does not treat an on argument as a pair of equation delimiters. Anatomy of an equation eqn input consists of tokens. Consider a form of Newton's second law of motion. The input .EQ F = m a .EN becomes F=ma. Each of F, =, m, and a is a token. Spaces and newlines are interchangeable; they separate tokens but do not break lines or produce space in the output. Beyond their primary functions, the following input characters separate tokens as well. { } Braces perform grouping. Whereas e sup a b expresses (e to the a) times b, e sup { a b } means e to the (a times b). When immediately preceded by a left or right primitive, a brace loses its special meaning. ^ ~ are the half space and full space, respectively. Use them to tune the appearance of the output. Tab and leader characters separate tokens as well as advancing the drawing position to the next tab stop, but are seldom used in eqn input. When they occur, they must appear at the outermost lexical scope. This roughly means that they can't appear within braces that are necessary to disambiguate the input; eqn will diagnose an error in this event. 
(See subsection Macros below for additional token separation rules.) Other tokens are primitives, macros, an argument to either of the foregoing, or components of an equation. Primitives are fundamental keywords of the eqn language. They can configure an aspect of the preprocessor's state, as when setting a global font selection or type size (gifont and gsize), or declaring or deleting macros (define and undef); these are termed commands. Other primitives perform formatting operations on the tokens after them (as with fat, over, sqrt, or up). Equation components include mathematical variables, constants, numeric literals, and operators. eqn remaps some input character sequences to groff special character escape sequences for economy in equation entry and to ensure that glyphs from an unstyled font are used; see groff_char(7). + \[pl] ' \[fm] - \[mi] <= \[<=] = \[eq] >= \[>=] Macros permit primitives, components, and other macros to be collected and used together as a single token. Predefined macros make convenient the preparation of eqn input in a form resembling its spoken expression; for example, consider cos, hat, inf, and lim. Spacing and typeface GNU eqn imputes types to the components of an equation, adjusting the spacing between them accordingly. Recognized types are as follows; most affect spacing only, whereas the letter subtype of ordinary also assigns a style. ordinary character such as 1, a, or ! letter character to be italicized by default digit n/a operator large operator such as binary binary operator such as + relation relational operator such as = opening opening bracket such as ( closing closing bracket such as ) punctuation punctuation character such as , inner sub-formula contained within brackets suppress component to which automatic spacing is not applied Two primitives apply types to equation components. type t e Apply type t to expression e. chartype t text Assign each character in (unquoted) text type t, persistently. eqn sets up spacings and styles as if by the following commands. chartype "letter" abcdefghiklmnopqrstuvwxyz chartype "letter" ABCDEFGHIKLMNOPQRSTUVWXYZ chartype "letter" \[*a]\[*b]\[*g]\[*d]\[*e]\[*z] chartype "letter" \[*y]\[*h]\[*i]\[*k]\[*l]\[*m] chartype "letter" \[*n]\[*c]\[*o]\[*p]\[*r]\[*s] chartype "letter" \[*t]\[*u]\[*f]\[*x]\[*q]\[*w] chartype "binary" *\[pl]\[mi] chartype "relation" <>\[eq]\[<=]\[>=] chartype "opening" {([ chartype "closing" })] chartype "punctuation" ,;:. chartype "suppress" ^~ eqn assigns all other ordinary and special roff characters, including numerals 09, the ordinary type. (The digit type is not used, but is available for customization.) In keeping with common practice in mathematical typesetting, lowercase, but not uppercase, Greek letters are assigned the letter type to style them in italics. The macros for producing ellipses, ..., cdots, and ldots, use the inner type. Primitives eqn supports without alteration the AT&T eqn primitives above, back, bar, bold, define, down, fat, font, from, fwd, gfont, gsize, italic, left, lineup, mark, matrix, ndefine, over, right, roman, size, sqrt, sub, sup, tdefine, to, under, and up. New primitives We describe the GNU extension primitives type and chartype in subsection Spacing and typeface above; set and reset in subsection Customization below; and gbfont, gifont, and grfont in subsection Fonts below. In the following synopses, X can be any character not appearing in the parameter thus bracketed. e1 accent e2 Set e2 as an accent over e1. 
eqn assumes that e2 is at the appropriate height for a lowercase letter without an ascender, and shifts it vertically based on e1's height. For example, eqn defines hat as follows. accent { roman "^" } dotdot, dot, tilde, vec, and dyad are also defined using the accent primitive. big e Enlarge the expression e; semantics like those of CSS large are intended. In troff output, the type size is increased by 5 scaled points. MathML output emits the following. <mstyle mathsize='big'> copy file include file Interpolate the contents of file, omitting lines beginning with .EQ or .EN. If a relative path name, file is sought relative to the current working directory. ifdef name X anything X If name is defined as a primitive or macro, interpret anything. nosplit text As "text", but since text is not quoted it is subject to macro expansion; it is not split up and the spacing between characters not adjusted per subsection Spacing and typeface above. e opprime As prime, but set the prime symbol as an operator on e. In the input A opprime sub 1, the 1 is tucked under the prime as a subscript to the A (as is conventional in mathematical typesetting), whereas when prime is used, the 1 is a subscript to the prime character. The precedence of opprime is the same as that of bar and under, and higher than that of other primitives except accent and uaccent. In unquoted text, a neutral apostrophe (') that is not the first character on the input line is treated like opprime. sdefine name X anything X As define, but name is not recognized as a macro if called with arguments. e1 smallover e2 As over, but reduces the type size of e1 and e2, and puts less vertical space between e1 and e2 and the fraction bar. The over primitive corresponds to the TeX \over primitive in displayed equation styles; smallover corresponds to \over in non-display (inline) styles. space n Set extra vertical spacing around the equation, replacing the default values, where n is an integer in hundredths of an em. If positive, n increases vertical spacing before the equation; if negative, it does so after the equation. This primitive provides an interface to groff's \x escape sequence, but with the opposite sign convention. It has no effect if the equation is part of a pic(1) picture. special troff-macro e Construct an object by calling troff-macro on e. The troff string 0s contains the eqn output for e, and the registers 0w, 0h, 0d, 0skern, and 0skew the width, height, depth, subscript kern, and skew of e, respectively. (The subscript kern of an object indicates how much a subscript on that object should be tucked in, or placed to the left relative to a non-subscripted glyph of the same size. The skew of an object is how far to the right of the center of the object an accent over it should be placed.) The macro must modify 0s so that it outputs the desired result, returns the drawing position to the text baseline at the beginning of e, and updates the foregoing registers to correspond to the new dimensions of the result. Suppose you want a construct that cancels an expression by drawing a diagonal line through it. .de Ca . ds 0s \ \Z'\\*(0s'\ \v'\\n(0du'\ \D'l \\n(0wu -\\n(0hu-\\n(0du'\ \v'\\n(0hu' .. .EQ special Ca "x \[mi] 3 \[pl] x" ~ 3 .EN We use the \[mi] and \[pl] special characters instead of + and - because they are part of the argument to a troff macro, so eqn does not transform them to mathematical glyphs for us. 
Here's a more complicated construct that draws a box around an expression; the bottom of the box rests on the text baseline. We define the eqn macro box to wrap the call of the troff macro Bx. .de Bx .ds 0s \ \Z'\\h'1n'\\*[0s]'\ \v'\\n(0du+1n'\ \D'l \\n(0wu+2n 0'\ \D'l 0 -\\n(0hu-\\n(0du-2n'\ \D'l -\\n(0wu-2n 0'\ \D'l 0 \\n(0hu+\\n(0du+2n'\ \h'\\n(0wu+2n' .nr 0w +2n .nr 0d +1n .nr 0h +1n .. .EQ define box ' special Bx $1 ' box(foo) ~ "bar" .EN split "text" As text, but since text is quoted, it is not subject to macro expansion; it is split up and the spacing between characters adjusted per subsection Spacing and typeface above. e1 uaccent e2 Set e2 as an accent under e1. e2 is assumed to be at the appropriate height for a letter without a descender; eqn vertically shifts it depending on whether e1 has a descender. utilde is predefined using uaccent as a tilde accent below the baseline. undef name Remove definition of macro or primitive name, making it undefined. vcenter e Vertically center e about the math axis, a horizontal line upon which fraction bars and characters such as + and are aligned. MathML already behaves this way, so eqn ignores this primitive when producing that output format. The built-in sum macro is defined as if by the following. define sum ! { type "operator" vcenter size +5 \(*S } ! Extended primitives GNU eqn extends the syntax of some AT&T eqn primitives, introducing one deliberate incompatibility. delim on eqn recognizes an on argument to the delim primitive specially, restoring any delimiters previously disabled with delim off. If delimiters haven't been specified, neither command has effect. Few eqn documents are expected to use o and n as left and right delimiters, respectively. If yours does, swap them, or select others. col n { ... } ccol n { ... } lcol n { ... } rcol n { ... } pile n { ... } cpile n { ... } lpile n { ... } rpile n { ... } The integer value n, in hundredths of an em, uses the formatter's \x escape sequence to increase the vertical spacing between rows; eqn ignores it when producing MathML. Negative values are accepted but have no effect. If more than one n occurs in a matrix or pile, the largest is used. Customization When eqn generates troff input, the appearance of equations is controlled by a large number of parameters. They have no effect when generating MathML, which delegates typesetting to a MathML rendering engine. Configure these parameters with the set and reset primitives. set p n assigns parameter p the integer value n; n is interpreted in units of hundredths of an em unless otherwise stated. For example, set x_height 45 says that eqn should assume that the font's x-height is 0.45 ems. reset p restores the default value of parameter p. Available parameters p are as follows; defaults are shown in parentheses. We intend these descriptions to be expository rather than rigorous. minimum_size sets a floor for the type size (in scaled points) at which equations are set (5). fat_offset The fat primitive emboldens an equation by overprinting two copies of the equation horizontally offset by this amount (4). In MathML mode, components to which fat_offset applies instead use the following. <mstyle mathvariant='double-struck'> over_hang A fraction bar is longer by twice this amount than the maximum of the widths of the numerator and denominator; in other words, it overhangs the numerator and denominator by at least this amount (0). accent_width When bar or under is applied to a single character, the line is this long (31). 
Normally, bar or under produces a line whose length is the width of the object to which it applies; in the case of a single character, this tends to produce a line that looks too long. delimiter_factor Extensible delimiters produced with the left and right primitives have a combined height and depth of at least this many thousandths of twice the maximum amount by which the sub-equation that the delimiters enclose extends away from the axis (900). delimiter_shortfall Extensible delimiters produced with the left and right primitives have a combined height and depth not less than the difference of twice the maximum amount by which the sub-equation that the delimiters enclose extends away from the axis and this amount (50). null_delimiter_space This much horizontal space is inserted on each side of a fraction (12). script_space The width of subscripts and superscripts is increased by this amount (5). thin_space This amount of space is automatically inserted after punctuation characters (17). medium_space This amount of space is automatically inserted on either side of binary operators (22). thick_space This amount of space is automatically inserted on either side of relations (28). half_space configures the width of the space produced by the ^ token (17). full_space configures the width of the space produced by the ~ token (28). x_height The height of lowercase letters without ascenders such as x (45). axis_height The height above the baseline of the center of characters such as + and (26). It is important that this value is correct for the font you are using. default_rule_thickness This should be set to the thickness of the \[ru] character, or the thickness of horizontal lines produced with the \D escape sequence (4). num1 The over primitive shifts up the numerator by at least this amount (70). num2 The smallover primitive shifts up the numerator by at least this amount (36). denom1 The over primitive shifts down the denominator by at least this amount (70). denom2 The smallover primitive shifts down the denominator by at least this amount (36). sup1 Normally superscripts are shifted up by at least this amount (42). sup2 Superscripts within superscripts or upper limits or numerators of smallover fractions are shifted up by at least this amount (37). Conventionally, this is less than sup1. sup3 Superscripts within denominators or square roots or subscripts or lower limits are shifted up by at least this amount (28). Conventionally, this is less than sup2. sub1 Subscripts are normally shifted down by at least this amount (20). sub2 When there is both a subscript and a superscript, the subscript is shifted down by at least this amount (23). sup_drop The baseline of a superscript is no more than this much below the top of the object on which the superscript is set (38). sub_drop The baseline of a subscript is at least this much below the bottom of the object on which the subscript is set (5). big_op_spacing1 The baseline of an upper limit is at least this much above the top of the object on which the limit is set (11). big_op_spacing2 The baseline of a lower limit is at least this much below the bottom of the object on which the limit is set (17). big_op_spacing3 The bottom of an upper limit is at least this much above the top of the object on which the limit is set (20). big_op_spacing4 The top of a lower limit is at least this much below the bottom of the object on which the limit is set (60). big_op_spacing5 This much vertical space is added above and below limits (10). 
baseline_sep The baselines of the rows in a pile or matrix are normally this far apart (140). Usually equal to the sum of num1 and denom1. shift_down The midpoint between the top baseline and the bottom baseline in a matrix or pile is shifted down by this much from the axis (26). Usually equal to axis_height. column_sep This much space is added between columns in a matrix (100). matrix_side_sep This much space is added at each side of a matrix (17). draw_lines If non-zero, eqn draws lines using the troff \D escape sequence, rather than the \l escape sequence and the \[ru] special character. The eqnrc file sets the default: 1 on ps, html, and the X11 devices, otherwise 0. body_height is the presumed height of an equation above the text baseline; eqn adds any excess as extra pre-vertical line spacing with troff's \x escape sequence (85). body_depth is the presumed depth of an equation below the text baseline; eqn adds any excess as extra post-vertical line spacing with troff's \x escape sequence (35). nroff If non-zero, then ndefine behaves like define and tdefine is ignored, otherwise tdefine behaves like define and ndefine is ignored. The eqnrc file sets the default: 1 on ascii, latin1, utf8, and cp1047 devices, otherwise 0. Macros In GNU eqn, macros can take arguments. A word defined by any of the define, ndefine, or tdefine primitives followed immediately by a left parenthesis is treated as a parameterized macro call: subsequent tokens up to a matching right parenthesis are treated as comma-separated arguments. In this context only, commas and parentheses also serve as token separators. A macro argument is not terminated by a comma inside parentheses nested within it. In a macro definition, $n, where n is between 1 and 9 inclusive, is replaced by the nth argument; if there are fewer than n arguments, it is replaced by nothing. Predefined macros GNU eqn supports the predefined macros offered by AT&T eqn: and, approx, arc, cos, cosh, del, det, dot, dotdot, dyad, exp, for, grad, half, hat, if, inter, Im, inf, int, lim, ln, log, max, min, nothing, partial, prime, prod, Re, sin, sinh, sum, tan, tanh, tilde, times, union, vec, ==, !=, +=, ->, <-, <<, >>, and .... The lowercase classical Greek letters are available as alpha, beta, chi, delta, epsilon, eta, gamma, iota, kappa, lambda, mu, nu, omega, omicron, phi, pi, psi, rho, sigma, tau, theta, upsilon, xi, and zeta. Spell them with an initial capital letter (Alpha) or in full capitals (ALPHA) to obtain uppercase forms. GNU eqn further defines the macros cdot, cdots, and utilde (all discussed above), dollar, which sets a dollar sign, and ldots, which sets an ellipsis on the text baseline. Fonts eqn uses up to three typefaces to set an equation: italic (oblique), roman (upright), and bold. Assign each a groff typeface with the GNU extension primitives grfont, gifont, and gbfont. The defaults are the styles R, I, and B (applied to the current font family). The chartype primitive (see above) sets a character's type, which determines the face used to set it. The letter type is set in italics; others are set in roman. Use the bold primitive to select an (upright) bold style. gbfont f Select f as the bold font. gifont f Select f as the italic font. For AT&T eqn compatibility, gfont is recognized as a synonym for gifont. grfont f Select f as the roman font. Options top --help displays a usage message, while -v and --version show version information; all exit afterward. 
-C Recognize .EQ and .EN even when followed by a character other than space or newline. -d xy Specify delimiters x for left and y for right ends of equations not bracketed by .EQ/.EN. x and y need not be distinct. Any delim xy statements in the source file override this option. -f F is equivalent to gifont F. -m n is equivalent to set minimum_size n. -M dir Search dir for eqnrc before those listed in section Description above. -N Prohibit newlines within delimiters. This option allows eqn to recover better from missing closing delimiters. -p n Set sub- and superscripts n points smaller than the surrounding text. This option is deprecated. eqn normally sets sub- and superscripts at 70% of the type size of the surrounding text. -r Reduce the type size of subscripts at most once relative to the base type size for the equation. -R Don't load eqnrc. -s n is equivalent to gsize n. This option is deprecated. -T dev Prepare output for the device dev. This option defines a macro dev with the value 1; eqnrc thereby provides definitions appropriate to the device. However, if dev is MathML, eqn produces output in that language rather than roff, and eqnrc is not loaded. The default device is ps. Files top /usr/local/share/groff/1.23.0/tmac/eqnrc initializes the preprocessor. Any valid eqn input is accepted. MathML mode limitations top MathML's design assumes that it cannot know the exact physical characteristics of the media and devices on which it will be rendered. It does not support control of motions and sizes to the same degree troff does. GNU eqn's rendering parameters (see section Customziation above) have no effect on generated MathML. The special, up, down, fwd, and back primitives cannot be implemented, and yield a MathML <merror> message instead. The vcenter primitive is silently ignored, as centering on the math axis is the MathML default. Characters that eqn sets extra large in troff modenotably the integral signmay appear too small and need to have their <mstyle> wrappers adjusted by hand. As in its troff mode, eqn in MathML mode leaves the .EQ and .EN tokens in place, but emits nothing corresponding to delim delimiters. They can, however, be recognized as character sequences that begin with <math>, end with </math>, and do not cross line boundaries. Caveats top Tokens must be double-quoted in eqn input if they are not to be recognized as names of macros or primitives, or if they are to be interpreted by troff. In particular, short ones, like pi and PI, can collide with troff identifiers. For instance, the eqn command gifont PI does not select groff's Palatino italic font for the global italic face; you must use gifont "PI" instead. Delimited equations are set at the type size current at the beginning of the input line, not necessarily that immediately preceding the opening delimiter. Unlike TeX, eqn does not inherently distinguish displayed and inline equation styles; see the smallover primitive above. However, macro packages frequently define EQ and EN macros such that the equation within is displayed. These macros may accept arguments permitting the equation to be labeled or captioned; see the package's documentation. Bugs top eqn abuses terminologyits equations can be inequalities, bare expressions, or unintelligible gibberish. But there's no changing it now. In nroff mode, lowercase Greek letters are rendered in roman instead of italic style. In MathML mode, the mark and lineup features don't work. These could, in theory, be implemented with <maligngroup> elements. 
In MathML mode, each digit of a numeric literal gets a separate <mn></mn> pair, and decimal points are tagged with <mo></mo>. This is allowed by the specification, but inefficient. Examples top We first illustrate eqn usage with a trigonometric identity. .EQ sin ( alpha + beta ) = sin alpha cos beta + cos alpha sin beta .EN It can be convenient to set up delimiters if mathematical content will appear frequently in running text. .EQ delim $$ .EN Having cached a table of logarithms, the property $ln ( x y ) = ln x + ln y$ sped calculations. The quadratic formula affords an opportunity to use fractions, radicals, and the full space token ~. .EQ x = { - b ~ \[+-] ~ sqrt { b sup 2 - 4 a c } } over { 2 a } .EN Alternatively, we could define the plus-minus sign as a binary operator. Automatic spacing puts 0.06 em less space on either side of the plus-minus than ~ does, this being the difference between the widths of the medium_space parameter used by binary operators and that of the full space. Independently, we can define a macro frac for setting fractions. .EQ chartype "binary" \[+-] define frac ! { $1 } over { $2 } ! x = frac(- b \[+-] sqrt { b sup 2 - 4 a c }, 2 a) .EN See also top Typesetting MathematicsUser's Guide (2nd edition), by Brian W. Kernighan and Lorinda L. Cherry, 1978, AT&T Bell Laboratories Computing Science Technical Report No. 17. The TeXbook, by Donald E. Knuth, 1984, Addison-Wesley Professional. Appendix G discusses many of the parameters from section Customization above in greater detail. groff_char(7) documents a variety of special character escape sequences useful in mathematical typesetting. See subsections Logical symbols, Mathematical symbols, and Greek glyphs in particular. groff(1), troff(1), pic(1), groff_font(5) COLOPHON top This page is part of the groff (GNU troff) project. Information about the project can be found at http://www.gnu.org/software/groff/. If you have a bug report for this manual page, see http://www.gnu.org/software/groff/. This page was obtained from the project's upstream Git repository https://git.savannah.gnu.org/git/groff.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-08.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org groff 1.23.0.453-330f9-d... 22 December 2023 eqn(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# eqn\n\n> Equation preprocessor for the groff (GNU Troff) document formatting system.\n> See also `troff` and `groff`.\n> More information: <https://manned.org/eqn>.\n\n- Process input with equations, saving the output for future typesetting with groff to PostScript:\n\n`eqn {{path/to/input.eqn}} > {{path/to/output.roff}}`\n\n- Typeset an input file with equations to PDF using the [me] macro package:\n\n`eqn -T {{pdf}} {{path/to/input.eqn}} | groff -{{me}} -T {{pdf}} > {{path/to/output.pdf}}`\n
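For a concrete end-to-end run, the quadratic-formula input from the manual page's Examples section can be piped through `groff -e` (which invokes eqn as a preprocessor); the output file name below is illustrative:

```sh
# Typeset the quadratic formula (taken from the eqn(1) Examples section)
# to PostScript; groff -e runs eqn over the input first.
groff -e -Tps > quadratic.ps <<'EOF'
.EQ
x = { - b ~ \[+-] ~ sqrt { b sup 2 - 4 a c } } over { 2 a }
.EN
EOF
```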
ethtool
ethtool(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training ethtool(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | BUGS | AUTHOR | AVAILABILITY | COLOPHON ETHTOOL(8) System Manager's Manual ETHTOOL(8) NAME top ethtool - query or control network driver and hardware settings SYNOPSIS top ethtool devname ethtool -h|--help ethtool --version ethtool [--debug N] args ethtool [--json] args ethtool [-I | --include-statistics] args ethtool --monitor [ command ] [ devname ] ethtool -a|--show-pause devname ethtool -A|--pause devname [autoneg on|off] [rx on|off] [tx on|off] ethtool -c|--show-coalesce devname ethtool -C|--coalesce devname [adaptive-rx on|off] [adaptive-tx on|off] [rx-usecs N] [rx-frames N] [rx-usecs-irq N] [rx-frames-irq N] [tx-usecs N] [tx-frames N] [tx-usecs-irq N] [tx-frames-irq N] [stats-block-usecs N] [pkt-rate-low N] [rx-usecs-low N] [rx-frames-low N] [tx-usecs-low N] [tx-frames-low N] [pkt-rate-high N] [rx-usecs-high N] [rx-frames-high N] [tx-usecs-high N] [tx-frames-high N] [sample-interval N] [cqe-mode-rx on|off] [cqe-mode-tx on|off] [tx-aggr-max-bytes N] [tx-aggr-max-frames N] [tx-aggr-time-usecs N] ethtool -g|--show-ring devname ethtool -G|--set-ring devname [rx N] [rx-mini N] [rx-jumbo N] [tx N] [rx-buf-len N] [cqe-size N] [tx-push N] [rx-push N] [tx-push-buf-len N] ethtool -i|--driver devname ethtool -d|--register-dump devname [raw on|off] [hex on|off] [file name] ethtool -e|--eeprom-dump devname [raw on|off] [offset N] [length N] ethtool -E|--change-eeprom devname [magic N] [offset N] [length N] [value N] ethtool -k|--show-features|--show-offload devname ethtool -K|--features|--offload devname feature on|off ... ethtool -p|--identify devname [N] ethtool -P|--show-permaddr devname ethtool -r|--negotiate devname ethtool -S|--statistics devname [--all-groups|--groups [eth-phy] [eth-mac] [eth-ctrl] [rmon] ] ethtool --phy-statistics devname ethtool -t|--test devname [offline|online|external_lb] ethtool -s devname [speed N] [lanes N] [duplex half|full] [port tp|aui|bnc|mii] [mdix auto|on|off] [autoneg on|off] [advertise N[/M] | advertise mode on|off ...] [phyad N] [xcvr internal|external] [wol N[/M] | wol p|u|m|b|a|g|s|f|d...] [sopass xx:yy:zz:aa:bb:cc] [master-slave preferred-master|preferred-slave|forced- master|forced-slave] [msglvl N[/M] | msglvl type on|off ...] ethtool -n|-u|--show-nfc|--show-ntuple devname [ rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 | rule N ] ethtool -N|-U|--config-nfc|--config-ntuple devname rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 m|v|t|s|d|f|n|r... | flow-type ether|ip4|tcp4|udp4|sctp4|ah4|esp4|ip6|tcp6|udp6|ah6|esp6|sctp6 [src xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc]] [dst xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc]] [proto N [m N]] [src-ip ip-address [m ip-address]] [dst-ip ip-address [m ip-address]] [tos N [m N]] [tclass N [m N]] [l4proto N [m N]] [src-port N [m N]] [dst-port N [m N]] [spi N [m N]] [l4data N [m N]] [vlan-etype N [m N]] [vlan N [m N]] [user-def N [m N]] [dst-mac xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc]] [action N] [context N] [loc N] | delete N ethtool -w|--get-dump devname [data filename] ethtool -W|--set-dump devname N ethtool -T|--show-time-stamping devname ethtool -x|--show-rxfh-indir|--show-rxfh devname ethtool -X|--set-rxfh-indir|--rxfh devname [hkey xx:yy:zz:aa:bb:cc:...] [start N] [ equal N | weight W0 W1 ... 
| default ] [hfunc FUNC] [context CTX | new] [delete] ethtool -f|--flash devname file [N] ethtool -l|--show-channels devname ethtool -L|--set-channels devname [rx N] [tx N] [other N] [combined N] ethtool -m|--dump-module-eeprom|--module-info devname [raw on|off] [hex on|off] [offset N] [length N] [page N] [bank N] [i2c N] ethtool --show-priv-flags devname ethtool --set-priv-flags devname flag on|off ... ethtool --show-eee devname ethtool --set-eee devname [eee on|off] [tx-lpi on|off] [tx- timer N] [advertise N] ethtool --set-phy-tunable devname [ downshift on|off [count N] ] [ fast-link-down on|off [msecs N] ] [ energy-detect-power-down on|off [msecs N] ] ethtool --get-phy-tunable devname [downshift] [fast-link-down] [energy-detect-power-down] ethtool --get-tunable devname [rx-copybreak] [tx-copybreak] [tx- buf-size] [pfc-prevention-tout] ethtool --set-tunable devname [rx-copybreak N] [tx-copybreak N] [tx-buf-size N] [pfc-prevention-tout N] ethtool --reset devname [flags N] [mgmt] [mgmt-shared] [irq] [irq-shared] [dma] [dma-shared] [filter] [filter-shared] [offload] [offload-shared] [mac] [mac-shared] [phy] [phy- shared] [ram] [ram-shared] [ap] [ap-shared] [dedicated] [all] ethtool --show-fec devname ethtool --set-fec devname encoding auto|off|rs|baser|llrs [...] ethtool -Q|--per-queue devname [queue_mask %x] sub_command ... ethtool --cable-test devname ethtool --cable-test-tdr devname [first N] [last N] [step N] [pair N] ethtool --show-tunnels devname ethtool --show-module devname ethtool --set-module devname [power-mode-policy high|auto] ethtool --get-plca-cfg devname ethtool --set-plca-cfg devname [enable on|off] [node-id N] [node-cnt N] [to-tmr N] [burst-cnt N] [burst-tmr N] ethtool --get-plca-status devname ethtool --show-mm devname ethtool --set-mm devname [verify-enabled on|off] [verify-time N] [tx-enabled on|off] [pmac-enabled on|off] [tx-min-frag-size N] ethtool --show-pse devname ethtool --set-pse devname [podl-pse-admin-control enable|disable] DESCRIPTION top ethtool is used to query and control network device driver and hardware settings, particularly for wired Ethernet devices. devname is the name of the network device on which ethtool should operate. OPTIONS top ethtool with a single argument specifying the device name prints current settings of the specified device. -h --help Shows a short help message. --version Shows the ethtool version number. --debug N Turns on debugging messages. Argument is interpreted as a mask: 0x01 Parser information --json Output results in JavaScript Object Notation (JSON). Only a subset of options support this. Those which do not will continue to output plain text in the presence of this option. -I --include-statistics Include command-related statistics in the output. This option allows displaying relevant device statistics for selected get commands. -a --show-pause Queries the specified Ethernet device for pause parameter information. --src aggregate|emac|pmac If the MAC Merge layer is supported, request a particular source of device statistics (eMAC or pMAC, or their aggregate). Only valid if ethtool was invoked with the -I --include-statistics argument. -A --pause Changes the pause parameters of the specified Ethernet device. autoneg on|off Specifies whether pause autonegotiation should be enabled. rx on|off Specifies whether RX pause should be enabled. tx on|off Specifies whether TX pause should be enabled. -c --show-coalesce Queries the specified network device for coalescing information. 
-C --coalesce Changes the coalescing settings of the specified network device. -g --show-ring Queries the specified network device for rx/tx ring parameter information. -G --set-ring Changes the rx/tx ring parameters of the specified network device. rx N Changes the number of ring entries for the Rx ring. rx-mini N Changes the number of ring entries for the Rx Mini ring. rx-jumbo N Changes the number of ring entries for the Rx Jumbo ring. tx N Changes the number of ring entries for the Tx ring. rx-buf-len N Changes the size of a buffer in the Rx ring. cqe-size N Changes the size of completion queue event. tx-push on|off Specifies whether TX push should be enabled. rx-push on|off Specifies whether RX push should be enabled. tx-push-buf-len N Specifies the maximum number of bytes of a transmitted packet a driver can push directly to the underlying device -i --driver Queries the specified network device for associated driver information. -d --register-dump Retrieves and prints a register dump for the specified network device. The register format for some devices is known and decoded others are printed in hex. When raw is enabled, then ethtool dumps the raw register data to stdout. If file is specified, then use contents of previous raw register dump, rather than reading from the device. -e --eeprom-dump Retrieves and prints an EEPROM dump for the specified network device. When raw is enabled, then it dumps the raw EEPROM data to stdout. The length and offset parameters allow dumping certain portions of the EEPROM. Default is to dump the entire EEPROM. raw on|off offset N length N -E --change-eeprom If value is specified, changes EEPROM byte for the specified network device. offset and value specify which byte and it's new value. If value is not specified, stdin is read and written to the EEPROM. The length and offset parameters allow writing to certain portions of the EEPROM. Because of the persistent nature of writing to the EEPROM, a device-specific magic key must be specified to prevent the accidental writing to the EEPROM. -k --show-features --show-offload Queries the specified network device for the state of protocol offload and other features. -K --features --offload Changes the offload parameters and other features of the specified network device. The following feature names are built-in and others may be defined by the kernel. rx on|off Specifies whether RX checksumming should be enabled. tx on|off Specifies whether TX checksumming should be enabled. sg on|off Specifies whether scatter-gather should be enabled. tso on|off Specifies whether TCP segmentation offload should be enabled. ufo on|off Specifies whether UDP fragmentation offload should be enabled gso on|off Specifies whether generic segmentation offload should be enabled gro on|off Specifies whether generic receive offload should be enabled lro on|off Specifies whether large receive offload should be enabled rxvlan on|off Specifies whether RX VLAN acceleration should be enabled txvlan on|off Specifies whether TX VLAN acceleration should be enabled ntuple on|off Specifies whether Rx ntuple filters and actions should be enabled rxhash on|off Specifies whether receive hashing offload should be enabled -p --identify Initiates adapter-specific action intended to enable an operator to easily identify the adapter by sight. Typically this involves blinking one or more LEDs on the specific network port. [ N] Length of time to perform phys-id, in seconds. 
-P --show-permaddr Queries the specified network device for permanent hardware address. -r --negotiate Restarts auto-negotiation on the specified Ethernet device, if auto-negotiation is enabled. -S --statistics Queries the specified network device for standard (IEEE, IETF, etc.), or NIC- and driver-specific statistics. NIC- and driver-specific statistics are requested when no group of statistics is specified. NIC- and driver-specific statistics and standard statistics are independent, devices may implement either, both or none. There is little commonality between naming of NIC- and driver-specific statistics across vendors. --all-groups --groups [eth-phy] [eth-mac] [eth-ctrl] [rmon] Request groups of standard device statistics. --src aggregate|emac|pmac If the MAC Merge layer is supported, request a particular source of device statistics (eMAC or pMAC, or their aggregate). --phy-statistics Queries the specified network device for PHY specific statistics. -t --test Executes adapter selftest on the specified network device. Possible test modes are: offline Perform full set of tests, possibly interrupting normal operation during the tests, online Perform limited set of tests, not interrupting normal operation, external_lb Perform full set of tests, as for offline, and additionally an external-loopback test. -s --change Allows changing some or all settings of the specified network device. All following options only apply if -s was specified. speed N Set speed in Mb/s. ethtool with just the device name as an argument will show you the supported device speeds. lanes N Set number of lanes. duplex half|full Sets full or half duplex mode. port tp|aui|bnc|mii Selects device port. master-slave preferred-master|preferred-slave|forced- master|forced-slave Configure MASTER/SLAVE role of the PHY. When the PHY is configured as MASTER, the PMA Transmit function shall source TX_TCLK from a local clock source. When configured as SLAVE, the PMA Transmit function shall source TX_TCLK from the clock recovered from data stream provided by MASTER. Not all devices support this. preferred-master Prefer MASTER role on autonegotiation preferred-slave Prefer SLAVE role on autonegotiation forced-master Force the PHY in MASTER role. Can be used without autonegotiation forced-slave Force the PHY in SLAVE role. Can be used without autonegotiation mdix auto|on|off Selects MDI-X mode for port. May be used to override the automatic detection feature of most adapters. An argument of auto means automatic detection of MDI status, on forces MDI-X (crossover) mode, while off means MDI (straight through) mode. The driver should guarantee that this command takes effect immediately, and if necessary may reset the link to cause the change to take effect. autoneg on|off Specifies whether autonegotiation should be enabled. Autonegotiation is enabled by default, but in some network devices may have trouble with it, so you can disable it if really necessary. advertise N Sets the speed and duplex advertised by autonegotiation. 
The argument is a hexadecimal value using one or a combination of the following values: 0x001 10baseT Half 0x002 10baseT Full 0x100000000000000000000000 10baseT1L Full 0x8000000000000000000000000 10baseT1S Full 0x10000000000000000000000000 10baseT1S Half 0x20000000000000000000000000 10baseT1S_P2MP Half 0x004 100baseT Half 0x008 100baseT Full 0x80000000000000000 100baseT1 Full 0x40000000000000000000000 100baseFX Half 0x80000000000000000000000 100baseFX Full 0x010 1000baseT Half (not supported by IEEE standards) 0x020 1000baseT Full 0x20000 1000baseKX Full 0x20000000000 1000baseX Full 0x100000000000000000 1000baseT1 Full 0x8000 2500baseX Full (not supported by IEEE standards) 0x800000000000 2500baseT Full 0x1000000000000 5000baseT Full 0x1000 10000baseT Full 0x40000 10000baseKX4 Full 0x80000 10000baseKR Full 0x100000 10000baseR_FEC 0x40000000000 10000baseCR Full 0x80000000000 10000baseSR Full 0x100000000000 10000baseLR Full 0x200000000000 10000baseLRM Full 0x400000000000 10000baseER Full 0x200000 20000baseMLD2 Full (not supported by IEEE standards) 0x400000 20000baseKR2 Full (not supported by IEEE standards) 0x80000000 25000baseCR Full 0x100000000 25000baseKR Full 0x200000000 25000baseSR Full 0x800000 40000baseKR4 Full 0x1000000 40000baseCR4 Full 0x2000000 40000baseSR4 Full 0x4000000 40000baseLR4 Full 0x400000000 50000baseCR2 Full 0x800000000 50000baseKR2 Full 0x10000000000 50000baseSR2 Full 0x10000000000000 50000baseKR Full 0x20000000000000 50000baseSR Full 0x40000000000000 50000baseCR Full 0x80000000000000 50000baseLR_ER_FR Full 0x100000000000000 50000baseDR Full 0x8000000 56000baseKR4 Full 0x10000000 56000baseCR4 Full 0x20000000 56000baseSR4 Full 0x40000000 56000baseLR4 Full 0x1000000000 100000baseKR4 Full 0x2000000000 100000baseSR4 Full 0x4000000000 100000baseCR4 Full 0x8000000000 100000baseLR4_ER4 Full 0x200000000000000 100000baseKR2 Full 0x400000000000000 100000baseSR2 Full 0x800000000000000 100000baseCR2 Full 0x1000000000000000 100000baseLR2_ER2_FR2 Full 0x2000000000000000 100000baseDR2 Full 0x8000000000000000000 100000baseKR Full 0x10000000000000000000 100000baseSR Full 0x20000000000000000000 100000baseLR_ER_FR Full 0x40000000000000000000 100000baseCR Full 0x80000000000000000000 100000baseDR Full 0x4000000000000000 200000baseKR4 Full 0x8000000000000000 200000baseSR4 Full 0x10000000000000000 200000baseLR4_ER4_FR4 Full 0x20000000000000000 200000baseDR4 Full 0x40000000000000000 200000baseCR4 Full 0x100000000000000000000 200000baseKR2 Full 0x200000000000000000000 200000baseSR2 Full 0x400000000000000000000 200000baseLR2_ER2_FR2 Full 0x800000000000000000000 200000baseDR2 Full 0x1000000000000000000000 200000baseCR2 Full 0x200000000000000000 400000baseKR8 Full 0x400000000000000000 400000baseSR8 Full 0x800000000000000000 400000baseLR8_ER8_FR8 Full 0x1000000000000000000 400000baseDR8 Full 0x2000000000000000000 400000baseCR8 Full 0x2000000000000000000000 400000baseKR4 Full 0x4000000000000000000000 400000baseSR4 Full 0x8000000000000000000000 400000baseLR4_ER4_FR4 Full 0x10000000000000000000000 400000baseDR4 Full 0x20000000000000000000000 400000baseCR4 Full 0x200000000000000000000000 800000baseCR8 Full 0x400000000000000000000000 800000baseKR8 Full 0x800000000000000000000000 800000baseDR8 Full 0x1000000000000000000000000 800000baseDR8_2 Full 0x2000000000000000000000000 800000baseSR8 Full 0x4000000000000000000000000 800000baseVR8 Full phyad N PHY address. xcvr internal|external Selects transceiver type. 
Currently only internal and external can be specified, in the future further types might be added. wol p|u|m|b|a|g|s|f|d... Sets Wake-on-LAN options. Not all devices support this. The argument to this option is a string of characters specifying which options to enable. p Wake on PHY activity u Wake on unicast messages m Wake on multicast messages b Wake on broadcast messages a Wake on ARP g Wake on MagicPacket s Enable SecureOn password for MagicPacket f Wake on filter(s) d Disable (wake on nothing). This option clears all previous options. sopass xx:yy:zz:aa:bb:cc Sets the SecureOn password. The argument to this option must be 6 bytes in Ethernet MAC hex format (xx:yy:zz:aa:bb:cc). msglvl N msglvl type on|off ... Sets the driver message type flags by name or number. type names the type of message to enable or disable; N specifies the new flags numerically. The defined type names and numbers are: drv 0x0001 General driver status probe 0x0002 Hardware probing link 0x0004 Link state timer 0x0008 Periodic status check ifdown 0x0010 Interface being brought down ifup 0x0020 Interface being brought up rx_err 0x0040 Receive error tx_err 0x0080 Transmit error tx_queued 0x0100 Transmit queueing intr 0x0200 Interrupt handling tx_done 0x0400 Transmit completion rx_status 0x0800 Receive completion pktdata 0x1000 Packet contents hw 0x2000 Hardware status wol 0x4000 Wake-on-LAN status The precise meanings of these type flags differ between drivers. -n -u --show-nfc --show-ntuple Retrieves receive network flow classification options or rules. rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 Retrieves the hash options for the specified flow type. tcp4 TCP over IPv4 udp4 UDP over IPv4 ah4 IPSEC AH over IPv4 esp4 IPSEC ESP over IPv4 sctp4 SCTP over IPv4 tcp6 TCP over IPv6 udp6 UDP over IPv6 ah6 IPSEC AH over IPv6 esp6 IPSEC ESP over IPv6 sctp6 SCTP over IPv6 rule N Retrieves the RX classification rule with the given ID. -N -U --config-nfc --config-ntuple Configures receive network flow classification options or rules. rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 m|v|t|s|d|f|n|r... Configures the hash options for the specified flow type. m Hash on the Layer 2 destination address of the rx packet. v Hash on the VLAN tag of the rx packet. t Hash on the Layer 3 protocol field of the rx packet. s Hash on the IP source address of the rx packet. d Hash on the IP destination address of the rx packet. f Hash on bytes 0 and 1 of the Layer 4 header of the rx packet. n Hash on bytes 2 and 3 of the Layer 4 header of the rx packet. r Discard all packets of this flow type. When this option is set, all other options are ignored. flow-type ether|ip4|tcp4|udp4|sctp4|ah4|esp4|ip6|tcp6|udp6|ah6|esp6|sctp6 Inserts or updates a classification rule for the specified flow type. ether Ethernet ip4 Raw IPv4 tcp4 TCP over IPv4 udp4 UDP over IPv4 sctp4 SCTP over IPv4 ah4 IPSEC AH over IPv4 esp4 IPSEC ESP over IPv4 ip6 Raw IPv6 tcp6 TCP over IPv6 udp6 UDP over IPv6 sctp6 SCTP over IPv6 ah6 IPSEC AH over IPv6 esp6 IPSEC ESP over IPv6 For all fields that allow both a value and a mask to be specified, the mask may be specified immediately after the value using the m keyword, or separately using the field name keyword with -mask appended, e.g. src-mask. src xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc] Includes the source MAC address, specified as 6 bytes in hexadecimal separated by colons, along with an optional mask. Valid only for flow-type ether. 
dst xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc] Includes the destination MAC address, specified as 6 bytes in hexadecimal separated by colons, along with an optional mask. Valid only for flow-type ether. proto N [m N] Includes the Ethernet protocol number (ethertype) and an optional mask. Valid only for flow-type ether. src-ip ip-address [m ip-address] Specify the source IP address of the incoming packet to match along with an optional mask. Valid for all IP based flow-types. dst-ip ip-address [m ip-address] Specify the destination IP address of the incoming packet to match along with an optional mask. Valid for all IP based flow-types. tos N [m N] Specify the value of the Type of Service field in the incoming packet to match along with an optional mask. Applies to all IPv4 based flow-types. tclass N [m N] Specify the value of the Traffic Class field in the incoming packet to match along with an optional mask. Applies to all IPv6 based flow-types. l4proto N [m N] Includes the layer 4 protocol number and optional mask. Valid only for flow-types ip4 and ip6. src-port N [m N] Specify the value of the source port field (applicable to TCP/UDP packets) in the incoming packet to match along with an optional mask. Valid for flow-types ip4, tcp4, udp4, and sctp4 and their IPv6 equivalents. dst-port N [m N] Specify the value of the destination port field (applicable to TCP/UDP packets)in the incoming packet to match along with an optional mask. Valid for flow- types ip4, tcp4, udp4, and sctp4 and their IPv6 equivalents. spi N [m N] Specify the value of the security parameter index field (applicable to AH/ESP packets)in the incoming packet to match along with an optional mask. Valid for flow-types ip4, ah4, and esp4 and their IPv6 equivalents. l4data N [m N] Specify the value of the first 4 Bytes of Layer 4 in the incoming packet to match along with an optional mask. Valid for ip4 and ip6 flow-types. vlan-etype N [m N] Includes the VLAN tag Ethertype and an optional mask. vlan N [m N] Includes the VLAN tag and an optional mask. user-def N [m N] Includes 64-bits of user-specific data and an optional mask. dst-mac xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc] Includes the destination MAC address, specified as 6 bytes in hexadecimal separated by colons, along with an optional mask. Valid for all IP based flow-types. action N Specifies the Rx queue to send packets to, or some other action. -1 Drop the matched flow -2 Use the matched flow as a Wake-on-LAN filter 0 or higher Rx queue to route the flow context N Specifies the RSS context to spread packets over multiple queues; either 0 for the default RSS context, or a value returned by ethtool -X ... context new. vf N Specifies the Virtual Function the filter applies to. Not compatible with action. queue N Specifies the Rx queue to send packets to. Not compatible with action. loc N Specify the location/ID to insert the rule. This will overwrite any rule present in that location and will not go through any of the rule ordering process. delete N Deletes the RX classification rule with the given ID. -w --get-dump Retrieves and prints firmware dump for the specified network device. By default, it prints out the dump flag, version and length of the dump data. When data is indicated, then ethtool fetches the dump data and directs it to a file. -W --set-dump Sets the dump flag for the device. -T --show-time-stamping Show the device's time stamping capabilities and associated PTP hardware clock. 
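The following is a brief usage sketch for the query, -s and -N options described above. The interface name eth0, the port number 5201, the SecureOn password and the rule location are placeholders rather than values taken from this page, and changing settings normally requires root privileges.

    # Query driver statistics, the permanent MAC address, and run the on-line self-test
    ethtool -S eth0
    ethtool -P eth0
    ethtool -t eth0 online

    # Force 100 Mb/s full duplex with autonegotiation disabled
    ethtool -s eth0 speed 100 duplex full autoneg off

    # Advertise only 100baseT Full (0x008) and 1000baseT Full (0x020);
    # the advertise argument is the bitwise OR of the values listed above,
    # here 0x008 | 0x020 = 0x028
    ethtool -s eth0 autoneg on advertise 0x028

    # Wake the machine on MagicPacket only, protected by a SecureOn password
    ethtool -s eth0 wol gs sopass 00:11:22:33:44:55

    # Steer TCP/IPv4 traffic with destination port 5201 to RX queue 2,
    # store the rule at location 10, read it back, then delete it
    ethtool -N eth0 flow-type tcp4 dst-port 5201 action 2 loc 10
    ethtool -n eth0 rule 10
    ethtool -N eth0 delete 10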
-x --show-rxfh-indir --show-rxfh Retrieves the receive flow hash indirection table and/or RSS hash key. -X --set-rxfh-indir --rxfh Configures the receive flow hash indirection table and/or RSS hash key. hkey Sets RSS hash key of the specified network device. RSS hash key should be of device supported length. Hash key format must be in xx:yy:zz:aa:bb:cc format meaning both the nibbles of a byte should be mentioned even if a nibble is zero. hfunc Sets RSS hash function of the specified network device. List of RSS hash functions which kernel supports is shown as a part of the --show-rxfh command output. start N For the equal and weight options, sets the starting receive queue for spreading flows to N. equal N Sets the receive flow hash indirection table to spread flows evenly between the first N receive queues. weight W0 W1 ... Sets the receive flow hash indirection table to spread flows between receive queues according to the given weights. The sum of the weights must be non-zero and must not exceed the size of the indirection table. default Sets the receive flow hash indirection table to its default value. context CTX | new Specifies an RSS context to act on; either new to allocate a new RSS context, or CTX, a value returned by a previous ... context new. delete Delete the specified RSS context. May only be used in conjunction with context and a non-zero CTX value. -f --flash Write a firmware image to flash or other non-volatile memory on the device. file Specifies the filename of the firmware image. The firmware must first be installed in one of the directories where the kernel firmware loader or firmware agent will look, such as /lib/firmware. N If the device stores multiple firmware images in separate regions of non-volatile memory, this parameter may be used to specify which region is to be written. The default is 0, requesting that all regions are written. All other values are driver- dependent. -l --show-channels Queries the specified network device for the numbers of channels it has. A channel is an IRQ and the set of queues that can trigger that IRQ. -L --set-channels Changes the numbers of channels of the specified network device. rx N Changes the number of channels with only receive queues. tx N Changes the number of channels with only transmit queues. other N Changes the number of channels used only for other purposes e.g. link interrupts or SR-IOV co-ordination. combined N Changes the number of multi-purpose channels. -m --dump-module-eeprom --module-info Retrieves and if possible decodes the EEPROM from plugin modules, e.g SFP+, QSFP. If the driver and module support it, the optical diagnostic information is also read and decoded. When either one of page, bank or i2c parameters is specified, dumps only of a single page or its portion is allowed. In such a case offset and length parameters are treated relatively to EEPROM page boundaries. --show-priv-flags Queries the specified network device for its private flags. The names and meanings of private flags (if any) are defined by each network device driver. --set-priv-flags Sets the device's private flags as specified. flag on|off Sets the state of the named private flag. --show-eee Queries the specified network device for its support of Energy-Efficient Ethernet (according to the IEEE 802.3az specifications) --set-eee Sets the device EEE behaviour. eee on|off Enables/disables the device support of EEE. tx-lpi on|off Determines whether the device should assert its Tx LPI. 
advertise N Sets the speeds for which the device should advertise EEE capabilities. Values are as for --change advertise tx-timer N Sets the amount of time the device should stay in idle mode prior to asserting its Tx LPI (in microseconds). This has meaning only when Tx LPI is enabled. --set-phy-tunable Sets the PHY tunable parameters. downshift on|off Specifies whether downshift should be enabled. count N Sets the PHY downshift re-tries count. fast-link-down on|off Specifies whether Fast Link Down should be enabled and time until link down (if supported). msecs N Sets the period after which the link is reported as down. Note that the PHY may choose the closest supported value. Only on reading back the tunable do you get the actual value. energy-detect-power-down on|off Specifies whether Energy Detect Power Down (EDPD) should be enabled (if supported). This will put the RX and TX circuit blocks into a low power mode, and the PHY will wake up periodically to send link pulses to avoid any lock-up situation with a peer PHY that may also have EDPD enabled. By default, this setting will also enable the periodic transmission of TX pulses. msecs N Some PHYs support configuration of the wake-up interval to send TX pulses. This setting allows the control of this interval, and 0 disables TX pulses if the PHY supports this. Disabling TX pulses can create a lock-up situation where neither of the PHYs wakes the other one. If unspecified the default value (in milliseconds) will be used by the PHY. --get-phy-tunable Gets the PHY tunable parameters. downshift For operation in cabling environments that are incompatible with 1000BASE-T, PHY device provides an automatic link speed downshift operation. Link speed downshift after N failed 1000BASE-T auto-negotiation attempts. Downshift is useful where cable does not have the 4 pairs instance. Gets the PHY downshift count/status. fast-link-down Depending on the mode it may take 0.5s - 1s until a broken link is reported as down. In certain use cases a link-down event needs to be reported as soon as possible. Some PHYs support a Fast Link Down Feature and may allow configuration of the delay before a broken link is reported as being down. Gets the PHY Fast Link Down status / period. energy-detect-power-down Gets the current configured setting for Energy Detect Power Down (if supported). --get-tunable Get the tunable parameters. rx-copybreak Get the current rx copybreak value in bytes. tx-copybreak Get the current tx copybreak value in bytes. tx-buf-size Get the current tx copybreak buffer size in bytes. pfc-prevention-tout Get the current pfc prevention timeout value in msecs. --set-tunable Set driver's tunable parameters. rx-copybreak N Set the rx copybreak value in bytes. tx-copybreak N Set the tx copybreak value in bytes. tx-buf-size N Set the tx copybreak buffer size in bytes. pfc-prevention-tout N Set pfc prevention timeout in msecs. Value of 0 means disable and 65535 means auto. --reset Reset hardware components specified by flags and components listed below flags N Resets the components based on direct flags mask mgmt Management processor irq Interrupt requester dma DMA engine filter Filtering/flow direction offload Protocol offload mac Media access controller phy Transceiver/PHY ram RAM shared between multiple components ap Application Processor dedicated All components dedicated to this interface all All components used by this interface, even if shared --show-fec Queries the specified network device for its support of Forward Error Correction. 
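As a sketch of the receive-side and tunable options described above; eth0 is a placeholder interface and the numeric values are illustrative assumptions, not examples from this page:

    # Spread receive flows evenly over the first 8 RX queues,
    # then restore the default indirection table
    ethtool -X eth0 equal 8
    ethtool -X eth0 default

    # Use 4 combined (RX+TX) channels
    ethtool -L eth0 combined 4

    # Dump and, where supported, decode the EEPROM of a plugged-in SFP+/QSFP module
    ethtool -m eth0

    # Enable Energy-Efficient Ethernet on the link
    ethtool --set-eee eth0 eee on

    # Allow the PHY to downshift after 3 failed 1000BASE-T negotiations,
    # then read back the value the PHY actually programmed
    ethtool --set-phy-tunable eth0 downshift on count 3
    ethtool --get-phy-tunable eth0 downshift

    # Raise the driver's RX copybreak threshold to 256 bytes
    ethtool --set-tunable eth0 rx-copybreak 256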
--set-fec Configures Forward Error Correction for the specified network device. Forward Error Correction modes selected by a user are expected to be persisted after any hotplug events. If a module is swapped that does not support the current FEC mode, the driver or firmware must take the link down administratively and report the problem in the system logs for users to correct. encoding auto|off|rs|baser|llrs [...] Sets the FEC encoding for the device. Combinations of options are specified as e.g. encoding auto rs ; the semantics of such combinations vary between drivers. auto Use the driver's default encoding off Turn off FEC RS Force RS-FEC encoding BaseR Force BaseR encoding LLRS Force LLRS-FEC encoding -Q|--per-queue Applies provided sub command to specific queues. queue_mask %x Sets the specific queues which the sub command is applied to. If queue_mask is not set, the sub command will be applied to all queues. sub_command Sub command to apply. The supported sub commands include --show-coalesce and --coalesce. --cable-test Perform a cable test and report the results. What results are returned depends on the capabilities of the network interface. Typically open pairs and shorted pairs can be reported, along with pairs being O.K. When a fault is detected the approximate distance to the fault may be reported. --cable-test-tdr Perform a cable test and report the raw Time Domain Reflectometer data. A pulse is sent down a cable pair and the amplitude of the reflection, for a given distance, is reported. A break in the cable returns a big reflection. Minor damage to the cable returns a small reflection. If the cable is shorted, the amplitude of the reflection can be negative. By default, data is returned for lengths between 0 and 150m at 1m steps, for all pairs. However parameters can be passed to restrict the collection of data. It should be noted, that the interface will round the distances to whatever granularity is actually implemented. This is often 0.8 of a meter. The results should include the actual rounded first and last distance and step size. first N Distance along the cable, in meters, where the first measurement should be made. last N Distance along the cable, in meters, where the last measurement should be made. step N Distance, in meters, between each measurement. pair N Which pair should be measured. Typically a cable has 4 pairs. 0 = Pair A, 1 = Pair B, ... --monitor Listens to netlink notification and displays them. command If argument matching a command is used, ethtool only shows notifications of this type. Without such argument or with --all, all notification types are shown. devname If a device name is used as argument, only notification for this device are shown. Default is to show notifications for all devices. --show-tunnels Show tunnel-related device capabilities and state. List UDP ports kernel has programmed the device to parse as VxLAN, or GENEVE tunnels. --show-module Show the transceiver module's parameters. --set-module Set the transceiver module's parameters. power-mode-policy high|auto Set the power mode policy for the module. When set to high, the module always operates at high power mode. When set to auto, the module is transitioned by the host to high power mode when the first port using it is put administratively up and to low power mode when the last port using it is put administratively down. The power mode policy can be set before a module is plugged-in. --get-plca-cfg Show the current PLCA parameters for the given interface. 
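A short sketch for the FEC, per-queue, cable-test and module options described above; eth0 is a placeholder, and the distances, queue mask and power-mode policy are illustrative assumptions rather than values from this page:

    # Let the driver choose its default FEC encoding, or force RS-FEC
    ethtool --set-fec eth0 encoding auto
    ethtool --set-fec eth0 encoding rs

    # Show the coalescing settings of queues 0 and 1 only (queue mask 0x3)
    ethtool --per-queue eth0 queue_mask 0x3 --show-coalesce

    # Run a basic cable test, then collect raw TDR data for the first
    # 20 m of the cable in 2 m steps
    ethtool --cable-test eth0
    ethtool --cable-test-tdr eth0 first 0 last 20 step 2

    # Keep the transceiver module in low-power mode until a port using it is brought up
    ethtool --set-module eth0 power-mode-policy auto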
--set-plca-cfg Change the PLCA settings for the given interface. enable on|off Enables or disables the PLCA function. When the PLCA RS is disabled (default), the PHY operates in plain CSMA/CD mode. To enable PLCA, the PHY must be assigned a unique plca-id other than 255. This one can be configured concurrently with the enable parameter. The enable parameter maps to IEEE 802.3cg-2019 clause 30.16.1.1.1 (aPLCAAdminState) and clause 30.16.1.2.1 (acPLCAAdminControl). node-id N The unique node identifier in the range [0 .. 255]. Node ID 0 is reserved for the coordinator node, the one generating the BEACON signal. There must be exactly one coordinator on a PLCA network. Setting the node ID to 255 (default) disables the node. This parameter maps to IEEE 802.3cg-2019 clause 30.16.1.1.4 (aPLCALocalNodeID). node-cnt N The node-cnt [1 .. 255] should be set after the maximum number of nodes that can be plugged to the multi-drop network. This parameter regulates the minimum length of the PLCA cycle. Therefore, it is only meaningful for the coordinator node (nod-id = 0). Setting this parameter on a follower node has no effect. The node-cnt parameter maps to IEEE 802.3cg-2019 clause 30.16.1.1.3 (aPLCANodeCount). to-tmr N The TO timer parameter sets the value of the transmit opportunity timer in bit-times, and shall be set equal across all the nodes sharing the same medium for PLCA to work. The default value of 32 is enough to cover a link of roughly 50 mt. This parameter maps to IEEE 802.3cg-2019 clause 30.16.1.1.5 (aPLCATransmitOpportunityTimer). burst-cnt N The burst-cnt parameter [0 .. 255] indicates the extra number of packets that the node is allowed to send during a single transmit opportunity. By default, this attribute is 0, meaning that the node can send a sigle frame per TO. When greater than 0, the PLCA RS keeps the TO after any transmission, waiting for the MAC to send a new frame for up to burst-tmr BTs. This can only happen a number of times per PLCA cycle up to the value of this parameter. After that, the burst is over and the normal counting of TOs resumes. This parameter maps to IEEE 802.3cg-2019 clause 30.16.1.1.6 (aPLCAMaxBurstCount). burst-tmr N The burst-tmr parameter [0 .. 255] sets how many bit- times the PLCA RS waits for the MAC to initiate a new transmission when burst-cnt is greater than 0. If the MAC fails to send a new frame within this time, the burst ends and the counting of TOs resumes. Otherwise, the new frame is sent as part of the current burst. This parameter maps to IEEE 802.3cg-2019 clause 30.16.1.1.7 (aPLCABurstTimer). The value of burst-tmr should be set greater than the Inter-Frame-Gap (IFG) time of the MAC (plus some margin) for PLCA burst mode to work as intended. --get-plca-status Show the current PLCA status for the given interface. If on, the PHY is successfully receiving or generating the BEACON signal. If off, the PLCA function is temporarily disabled and the PHY is operating in plain CSMA/CD mode. --show-mm Show the MAC Merge layer state. The ethtool argument -I --include-statistics can be used with this command, and MAC Merge layer statistics counters will also be retrieved. pmac-enabled Shows whether the pMAC is enabled and capable of receiving traffic and SMD-V frames (and responding to them with SMD-R replies). tx-enabled Shows whether transmission on the pMAC is administratively enabled. tx-active Shows whether transmission on the pMAC is active (verification is either successful, or was disabled). 
tx-min-frag-size Shows the minimum size (in octets) of transmitted non- final fragments which can be received by the link partner. Corresponds to the standard addFragSize variable using the formula: tx-min-frag-size = 64 * (1 + addFragSize) - 4 rx-min-frag-size Shows the minimum size (in octets) of non-final fragments which the local device supports receiving. verify-enabled Shows whether the verification state machine is enabled. This process, if successful, ensures that preemptible frames transmitted by the local device will not be dropped as error frames by the link partner. verify-time Shows the interval in ms between verification attempts, represented as an integer between 1 and 128 ms. The standard defines a fixed number of verification attempts (verifyLimit) before failing the verification process. max-verify-time Shows the maximum value for verify-time accepted by the local device, which may be less than 128 ms. verify-status Shows the current state of the verification state machine of the local device. Values can be INITIAL, VERIFYING, SUCCEEDED, FAILED or DISABLED. --set-mm Set the MAC Merge layer parameters. pmac-enabled on|off Enable reception for the pMAC. tx-enabled on|off Administatively enable transmission for the pMAC. tx-min-frag-size N Set the minimum size (in octets) of transmitted non- final fragments which can be received by the link partner. verify-enabled on|off Enable or disable the verification state machine. verify-time N Set the interval in ms between verification attempts. --show-pse Show the current Power Sourcing Equipment (PSE) status for the given interface. podl-pse-admin-state This attribute indicates the operational status of PoDL PSE functions, which can be modified using the podl-pse-admin-control parameter. It corresponds to IEEE 802.3-2018 30.15.1.1.2 (aPoDLPSEAdminState), with potential values being enabled, disabled podl-pse-power-detection-status This attribute indicates the power detection status of the PoDL PSE. The status depend on internal PSE state machine and automatic PD classification support. It corresponds to IEEE 802.3-2018 30.15.1.1.3 (aPoDLPSEPowerDetectionStatus) with potential values being disabled, searching, delivering power, sleep, idle, error --set-pse Set Power Sourcing Equipment (PSE) parameters. podl-pse-admin-control enable|disable This parameter manages PoDL PSE Admin operations in accordance with the IEEE 802.3-2018 30.15.1.2.1 (acPoDLPSEAdminControl) specification. BUGS top Not supported (in part or whole) on all network drivers. AUTHOR top ethtool was written by David Miller. Modifications by Jeff Garzik, Tim Hockin, Jakub Jelinek, Andre Majorel, Eli Kupermann, Scott Feldman, Andi Kleen, Alexander Duyck, Sucheta Chakraborty, Jesse Brandeburg, Ben Hutchings, Scott Branden. AVAILABILITY top ethtool is available from http://www.kernel.org/pub/software/network/ethtool/ COLOPHON top This page is part of the ethtool (utility for controlling network drivers and hardware) project. Information about the project can be found at https://www.kernel.org/pub/software/network/ethtool/. If you have a bug report for this manual page, send it to bwh@kernel.org, netdev@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/network/ethtool/ethtool.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-11-23.) 
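To round off the newer option groups described above (PLCA, MAC Merge and PSE), a minimal configuration sketch; eth0, the node IDs and the node count are placeholders, the --include-statistics placement follows the -I argument mentioned under --show-mm, and none of these commands are taken from this page:

    # Make this PHY the PLCA coordinator of a 6-node multidrop segment,
    # configure another node as a follower with node ID 3, then check the BEACON status
    ethtool --set-plca-cfg eth0 enable on node-id 0 node-cnt 6
    ethtool --set-plca-cfg eth0 enable on node-id 3
    ethtool --get-plca-status eth0
    ethtool --get-plca-cfg eth0

    # Enable the pMAC and start the frame-preemption verification handshake,
    # then read the MAC Merge state together with its statistics counters
    ethtool --set-mm eth0 pmac-enabled on tx-enabled on verify-enabled on
    ethtool --include-statistics --show-mm eth0

    # Enable PoDL power sourcing on a PSE-capable port
    ethtool --set-pse eth0 podl-pse-admin-control enable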
Ethtool version 6.6 November 2023 ETHTOOL(8) Pages that refer to this page: veth(4), ip-link(8), ovs-l3ping(8), ovs-test(8), ovs-vlan-test(8), tc-mqprio(8)
# ethtool\n\n> Display and modify Network Interface Controller (NIC) parameters.\n> More information: <http://man7.org/linux/man-pages/man8/ethtool.8.html>.\n\n- Display the current settings for an interface:\n\n`ethtool {{eth0}}`\n\n- Display the driver information for an interface:\n\n`ethtool --driver {{eth0}}`\n\n- Display all supported features for an interface:\n\n`ethtool --show-features {{eth0}}`\n\n- Display the network usage statistics for an interface:\n\n`ethtool --statistics {{eth0}}`\n\n- Blink one or more LEDs on an interface for 10 seconds:\n\n`ethtool --identify {{eth0}} {{10}}`\n\n- Set the link speed, duplex mode, and parameter auto-negotiation for a given interface:\n\n`ethtool -s {{eth0}} speed {{10|100|1000}} duplex {{half|full}} autoneg {{on|off}}`\n
eval
eval(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training eval(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT EVAL(1P) POSIX Programmer's Manual EVAL(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top eval construct command by concatenating arguments SYNOPSIS top eval [argument...] DESCRIPTION top The eval utility shall construct a command by concatenating arguments together, separating each with a <space> character. The constructed command shall be read and executed by the shell. OPTIONS top None. OPERANDS top See the DESCRIPTION. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top None. ASYNCHRONOUS EVENTS top Default. STDOUT top Not used. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top If there are no arguments, or only null arguments, eval shall return a zero exit status; otherwise, it shall return the exit status of the command defined by the string of concatenated arguments separated by <space> characters, or a non-zero exit status if the concatenation could not be parsed as a command and the shell is interactive (and therefore did not abort). CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top Since eval is not required to recognize the "--" end of options delimiter, in cases where the argument(s) to eval might begin with '-' it is recommended that the first argument is prefixed by a string that will not alter the commands to be executed, such as a <space> character: eval " $commands" or: eval " $(some_command)" EXAMPLES top foo=10 x=foo y='$'$x echo $y $foo eval y='$'$x echo $y 10 RATIONALE top This standard allows, but does not require, eval to recognize "--". Although this means applications cannot use "--" to protect against options supported as an extension (or errors reported for unsupported options), the nature of the eval utility is such that other means can be used to provide this protection (see APPLICATION USAGE above). FUTURE DIRECTIONS top None. SEE ALSO top Section 2.14, Special Built-In Utilities COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . 
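A commented sketch expanding on the EXAMPLES and APPLICATION USAGE sections above; the variable names and values are illustrative only:

    # Indirect expansion: build a command string, then let eval re-evaluate it
    foo=10
    x=foo
    y='$'$x        # without eval, y holds the literal string "$foo"
    eval y='$'$x   # eval re-parses the assignment, so y now holds 10
    echo $y        # prints: 10

    # Guard a generated command that might begin with '-' by prefixing a
    # space, since eval is not required to recognize the "--" delimiter
    commands='echo done'   # placeholder; the string could begin with '-'
    eval " $commands"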
IEEE/The Open Group 2017 EVAL(1P)
# eval\n\n> Execute arguments as a single command in the current shell and return its result.\n> More information: <https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#eval>.\n\n- Call `echo` with the "foo" argument:\n\n`eval "{{echo foo}}"`\n\n- Set a variable in the current shell:\n\n`eval "{{foo=bar}}"`\n
ex
ex(1p) - Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT EX(1P) POSIX Programmer's Manual EX(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top ex - text editor SYNOPSIS top ex [-rR] [-s|-v] [-c command] [-t tagstring] [-w size] [file...] DESCRIPTION top The ex utility is a line-oriented text editor. There are two other modes of the editor, open and visual, in which screen-oriented editing is available. This is described more fully by the ex open and visual commands and in vi(1p). If an operand is '-', the results are unspecified. This section uses the term edit buffer to describe the current working text. No specific implementation is implied by this term. All editing changes are performed on the edit buffer, and no changes to it shall affect any file until an editor command writes the file. Certain terminals do not have all the capabilities necessary to support the complete ex definition, such as the full-screen editing commands (visual mode or open mode). When these commands cannot be supported on such terminals, this condition shall not produce an error message such as ``not an editor command'' or report a syntax error. The implementation may either accept the commands and produce results on the screen that are the result of an unsuccessful attempt to meet the requirements of this volume of POSIX.1-2017 or report an error describing the terminal-related deficiency. OPTIONS top The ex utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines, except for the unspecified usage of '-', and that '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c command Specify an initial command to be executed in the first edit buffer loaded from an existing file (see the EXTENDED DESCRIPTION section). Implementations may support more than a single -c option. In such implementations, the specified commands shall be executed in the order specified on the command line. -r Recover the named files (see the EXTENDED DESCRIPTION section). Recovery information for a file shall be saved during an editor or system crash (for example, when the editor is terminated by a signal which the editor can catch), or after the use of an ex preserve command. A crash in this context is an unexpected failure of the system or utility that requires restarting the failed system or utility. A system crash implies that any utilities running at the time also crash. In the case of an editor or system crash, the number of changes to the edit buffer (since the most recent preserve command) that will be recovered is unspecified. If no file operands are given and the -t option is not specified, all other options, the EXINIT variable, and any .exrc files shall be ignored; a list of all recoverable files available to the invoking user shall be written, and the editor shall exit normally without further action. -R Set readonly edit option.
-s Prepare ex for batch use by taking the following actions: * Suppress writing prompts and informational (but not diagnostic) messages. * Ignore the value of TERM and any implementation default terminal type and assume the terminal is a type incapable of supporting open or visual modes; see the visual command and the description of vi(1p). * Suppress the use of the EXINIT environment variable and the reading of any .exrc file; see the EXTENDED DESCRIPTION section. * Suppress autoindentation, ignoring the value of the autoindent edit option. -t tagstring Edit the file containing the specified tagstring; see ctags(1p). The tags feature represented by -t tagstring and the tag command is optional. It shall be provided on any system that also provides a conforming implementation of ctags; otherwise, the use of -t produces undefined results. On any system, it shall be an error to specify more than a single -t option. -v Begin in visual mode (see vi(1p)). -w size Set the value of the window editor option to size. OPERANDS top The following operand shall be supported: file A pathname of a file to be edited. STDIN top The standard input consists of a series of commands and input text, as described in the EXTENDED DESCRIPTION section. The implementation may limit each line of standard input to a length of {LINE_MAX}. If the standard input is not a terminal device, it shall be as if the -s option had been specified. If a read from the standard input returns an error, or if the editor detects an end-of-file condition from the standard input, it shall be equivalent to a SIGHUP asynchronous event. INPUT FILES top Input files shall be text files or files that would be text files except for an incomplete last line that is not longer than {LINE_MAX}-1 bytes in length and contains no NUL characters. By default, any incomplete last line shall be treated as if it had a trailing <newline>. The editing of other forms of files may optionally be allowed by ex implementations. The .exrc files and source files shall be text files consisting of ex commands; see the EXTENDED DESCRIPTION section. By default, the editor shall read lines from the files to be edited without interpreting any of those lines as any form of editor command. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of ex: COLUMNS Override the system-selected horizontal screen size. See the Base Definitions volume of POSIX.12017, Chapter 8, Environment Variables for valid values and results when it is unset or null. EXINIT Determine a list of ex commands that are executed on editor start-up. See the EXTENDED DESCRIPTION section for more details of the initialization phase. HOME Determine a pathname of a directory that shall be searched for an editor start-up file named .exrc; see the EXTENDED DESCRIPTION section. LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_COLLATE Determine the locale for the behavior of ranges, equivalence classes, and multi-character collating elements within regular expressions. 
LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files), the behavior of character classes within regular expressions, the classification of characters as uppercase or lowercase letters, the case conversion of letters, and the detection of word boundaries. LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. LINES Override the system-selected vertical screen size, used as the number of lines in a screenful and the vertical screen size in visual mode. See the Base Definitions volume of POSIX.12017, Chapter 8, Environment Variables for valid values and results when it is unset or null. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. PATH Determine the search path for the shell command specified in the ex editor commands !, shell, read, and write, and the open and visual mode command !; see the description of command search and execution in Section 2.9.1.1, Command Search and Execution. SHELL Determine the preferred command line interpreter for use as the default value of the shell edit option. TERM Determine the name of the terminal type. If this variable is unset or null, an unspecified default terminal type shall be used. ASYNCHRONOUS EVENTS top The following term is used in this and following sections to specify command and asynchronous event actions: complete write A complete write is a write of the entire contents of the edit buffer to a file of a type other than a terminal device, or the saving of the edit buffer caused by the user executing the ex preserve command. Writing the contents of the edit buffer to a temporary file that will be removed when the editor exits shall not be considered a complete write. The following actions shall be taken upon receipt of signals: SIGINT If the standard input is not a terminal device, ex shall not write the file or return to command or text input mode, and shall exit with a non-zero exit status. Otherwise, if executing an open or visual text input mode command, ex in receipt of SIGINT shall behave identically to its receipt of the <ESC> character. Otherwise: 1. If executing an ex text input mode command, all input lines that have been completely entered shall be resolved into the edit buffer, and any partially entered line shall be discarded. 2. If there is a currently executing command, it shall be aborted and a message displayed. Unless otherwise specified by the ex or vi command descriptions, it is unspecified whether any lines modified by the executing command appear modified, or as they were before being modified by the executing command, in the buffer. If the currently executing command was a motion command, its associated command shall be discarded. 3. If in open or visual command mode, the terminal shall be alerted. 4. The editor shall then return to command mode. SIGCONT The screen shall be refreshed if in open or visual mode. SIGHUP If the edit buffer has been modified since the last complete write, ex shall attempt to save the edit buffer so that it can be recovered later using the -r option or the ex recover command. The editor shall not write the file or return to command or text input mode, and shall terminate with a non-zero exit status. SIGTERM Refer to SIGHUP. The action taken for all other signals is unspecified. 
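A few invocation sketches for the options described above; notes.txt, the search pattern and the tag name are placeholders, and the start-up handling of EXINIT is detailed in the EXTENDED DESCRIPTION below:

    # Open a file read-only and position the current line on the first match
    ex -R -c '/TODO/' notes.txt

    # List the files with recovery information available, then recover one of them
    ex -r
    ex -r notes.txt

    # Batch (non-interactive) edit: replace every "foo" with "bar" and write
    # the file back; -s suppresses prompts and informational messages
    printf '%s\n' '%s/foo/bar/g' 'x' | ex -s notes.txt

    # Start editing at the location of a tag (requires a ctags(1p) tags file)
    ex -t main

    # Run start-up commands from the environment instead of a .exrc file
    EXINIT='set autoindent wrapscan' ex notes.txt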
STDOUT top The standard output shall be used only for writing prompts to the user, for informational messages, and for writing lines from the file. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top The output from ex shall be text files. EXTENDED DESCRIPTION top Only the ex mode of the editor is described in this section. See vi(1p) for additional editing capabilities available in ex. When an error occurs, ex shall write a message. If the terminal supports a standout mode (such as inverse video), the message shall be written in standout mode. If the terminal does not support a standout mode, and the edit option errorbells is set, an alert action shall precede the error message. By default, ex shall start in command mode, which shall be indicated by a : prompt; see the prompt command. Text input mode can be entered by the append, insert, or change commands; it can be exited (and command mode re-entered) by typing a <period> ('.') alone at the beginning of a line. Initialization in ex and vi The following symbols are used in this and following sections to specify locations in the edit buffer: alternate and current pathnames Two pathnames, named current and alternate, are maintained by the editor. Any ex commands that take filenames as arguments shall set them as follows: 1. If a file argument is specified to the ex edit, ex, or recover commands, or if an ex tag command replaces the contents of the edit buffer. a. If the command replaces the contents of the edit buffer, the current pathname shall be set to the file argument or the file indicated by the tag, and the alternate pathname shall be set to the previous value of the current pathname. b. Otherwise, the alternate pathname shall be set to the file argument. 2. If a file argument is specified to the ex next command: a. If the command replaces the contents of the edit buffer, the current pathname shall be set to the first file argument, and the alternate pathname shall be set to the previous value of the current pathname. 3. If a file argument is specified to the ex file command, the current pathname shall be set to the file argument, and the alternate pathname shall be set to the previous value of the current pathname. 4. If a file argument is specified to the ex read and write commands (that is, when reading or writing a file, and not to the program named by the shell edit option), or a file argument is specified to the ex xit command: a. If the current pathname has no value, the current pathname shall be set to the file argument. b. Otherwise, the alternate pathname shall be set to the file argument. If the alternate pathname is set to the previous value of the current pathname when the current pathname had no previous value, then the alternate pathname shall have no value as a result. current line The line of the edit buffer referenced by the cursor. Each command description specifies the current line after the command has been executed, as the current line value. When the edit buffer contains no lines, the current line shall be zero; see Addressing in ex. current column The current display line column occupied by the cursor. (The columns shall be numbered beginning at 1.) Each command description specifies the current column after the command has been executed, as the current column value. This column is an ideal column that is remembered over the lifetime of the editor. 
The actual display line column upon which the cursor rests may be different from the current column; see the cursor positioning discussion in Command Descriptions in vi. set to non-<blank> A description for a current column value, meaning that the current column shall be set to the last display line column on which is displayed any part of the first non-<blank> of the line. If the line has no non-<blank> non-<newline> characters, the current column shall be set to the last display line column on which is displayed any part of the last non-<newline> character in the line. If the line is empty, the current column shall be set to column position 1. The length of lines in the edit buffer may be limited to {LINE_MAX} bytes. In open and visual mode, the length of lines in the edit buffer may be limited to the number of characters that will fit in the display. If either limit is exceeded during editing, an error message shall be written. If either limit is exceeded by a line read in from a file, an error message shall be written and the edit session may be terminated. If the editor stops running due to any reason other than a user command, and the edit buffer has been modified since the last complete write, it shall be equivalent to a SIGHUP asynchronous event. If the system crashes, it shall be equivalent to a SIGHUP asynchronous event. During initialization (before the first file is copied into the edit buffer or any user commands from the terminal are processed) the following shall occur: 1. If the environment variable EXINIT is set, the editor shall execute the ex commands contained in that variable. 2. If the EXINIT variable is not set, and all of the following are true: a. The HOME environment variable is not null and not empty. b. The file .exrc in the directory referred to by the HOME environment variable: i. Exists ii. Is owned by the same user ID as the real user ID of the process or the process has appropriate privileges iii. Is not writable by anyone other than the owner the editor shall execute the ex commands contained in that file. 3. If and only if all of the following are true: a. The current directory is not referred to by the HOME environment variable. b. A command in the EXINIT environment variable or a command in the .exrc file in the directory referred to by the HOME environment variable sets the editor option exrc. c. The .exrc file in the current directory: i. Exists ii. Is owned by the same user ID as the real user ID of the process, or by one of a set of implementation- defined user IDs iii. Is not writable by anyone other than the owner the editor shall attempt to execute the ex commands contained in that file. Lines in any .exrc file that are blank lines shall be ignored. If any .exrc file exists, but is not read for ownership or permission reasons, it shall be an error. After the EXINIT variable and any .exrc files are processed, the first file specified by the user shall be edited, as follows: 1. If the user specified the -t option, the effect shall be as if the ex tag command was entered with the specified argument, with the exception that if tag processing does not result in a file to edit, the effect shall be as described in step 3. below. 2. Otherwise, if the user specified any command line file arguments, the effect shall be as if the ex edit command was entered with the first of those arguments as its file argument. 3. Otherwise, the effect shall be as if the ex edit command was entered with a nonexistent filename as its file argument. 
It is unspecified whether this action shall set the current pathname. In an implementation where this action does not set the current pathname, any editor command using the current pathname shall fail until an editor command sets the current pathname. If the -r option was specified, the first time a file in the initial argument list or a file specified by the -t option is edited, if recovery information has previously been saved about it, that information shall be recovered and the editor shall behave as if the contents of the edit buffer have already been modified. If there are multiple instances of the file to be recovered, the one most recently saved shall be recovered, and an informational message that there are previous versions of the file that can be recovered shall be written. If no recovery information about a file is available, an informational message to this effect shall be written, and the edit shall proceed as usual. If the -c option was specified, the first time a file that already exists (including a file that might not exist but for which recovery information is available, when the -r option is specified) replaces or initializes the contents of the edit buffer, the current line shall be set to the last line of the edit buffer, the current column shall be set to non-<blank>, and the ex commands specified with the -c option shall be executed. In this case, the current line and current column shall not be set as described for the command associated with the replacement or initialization of the edit buffer contents. However, if the -t option or a tag command is associated with this action, the -c option commands shall be executed and then the movement to the tag shall be performed. The current argument list shall initially be set to the filenames specified by the user on the command line. If no filenames are specified by the user, the current argument list shall be empty. If the -t option was specified, it is unspecified whether any filename resulting from tag processing shall be prepended to the current argument list. In the case where the filename is added as a prefix to the current argument list, the current argument list reference shall be set to that filename. In the case where the filename is not added as a prefix to the current argument list, the current argument list reference shall logically be located before the first of the filenames specified on the command line (for example, a subsequent ex next command shall edit the first filename from the command line). If the -t option was not specified, the current argument list reference shall be to the first of the filenames on the command line. Addressing in ex Addressing in ex relates to the current line and the current column; the address of a line is its 1-based line number, the address of a column is its 1-based count from the beginning of the line. Generally, the current line is the last line affected by a command. The current line number is the address of the current line. In each command description, the effect of the command on the current line number and the current column is described. Addresses are constructed as follows: 1. The character '.' (period) shall address the current line. 2. The character '$' shall address the last line of the edit buffer. 3. The positive decimal number n shall address the nth line of the edit buffer. 4. 
The address "'x" refers to the line marked with the mark name character 'x', which shall be a lowercase letter from the portable character set, the backquote character, or the single-quote character. It shall be an error if the line that was marked is not currently present in the edit buffer or the mark has not been set. Lines can be marked with the ex mark or k commands, or the vi m command. 5. A regular expression enclosed by <slash> characters ('/') shall address the first line found by searching forwards from the line following the current line toward the end of the edit buffer and stopping at the first line for which the line excluding the terminating <newline> matches the regular expression. As stated in Regular Expressions in ex, an address consisting of a null regular expression delimited by <slash> characters ("//") shall address the next line for which the line excluding the terminating <newline> matches the last regular expression encountered. In addition, the second <slash> can be omitted at the end of a command line. If the wrapscan edit option is set, the search shall wrap around to the beginning of the edit buffer and continue up to and including the current line, so that the entire edit buffer is searched. Within the regular expression, the sequence "\/" shall represent a literal <slash> instead of the regular expression delimiter. 6. A regular expression enclosed in <question-mark> characters ('?') shall address the first line found by searching backwards from the line preceding the current line toward the beginning of the edit buffer and stopping at the first line for which the line excluding the terminating <newline> matches the regular expression. An address consisting of a null regular expression delimited by <question-mark> characters ("??") shall address the previous line for which the line excluding the terminating <newline> matches the last regular expression encountered. In addition, the second <question-mark> can be omitted at the end of a command line. If the wrapscan edit option is set, the search shall wrap around from the beginning of the edit buffer to the end of the edit buffer and continue up to and including the current line, so that the entire edit buffer is searched. Within the regular expression, the sequence "\?" shall represent a literal <question-mark> instead of the RE delimiter. 7. A <plus-sign> ('+') or a <hyphen-minus> ('-') followed by a decimal number shall address the current line plus or minus the number. A '+' or '-' not followed by a decimal number shall address the current line plus or minus 1. Addresses can be followed by zero or more address offsets, optionally <blank>-separated. Address offsets are constructed as follows: 1. A '+' or '-' immediately followed by a decimal number shall add (subtract) the indicated number of lines to (from) the address. A '+' or '-' not followed by a decimal number shall add (subtract) 1 to (from) the address. 2. A decimal number shall add the indicated number of lines to the address. It shall not be an error for an intermediate address value to be less than zero or greater than the last line in the edit buffer. It shall be an error for the final address value to be less than zero or greater than the last line in the edit buffer. Commands take zero, one, or two addresses; see the descriptions of 1addr and 2addr in Command Descriptions in ex. If more than the required number of addresses are provided to a command that requires zero addresses, it shall be an error. 
Otherwise, if more than the required number of addresses are provided to a command, the addresses specified first shall be evaluated and then discarded until the maximum number of valid addresses remain. Addresses shall be separated from each other by a <comma> (',') or a <semicolon> (';'). If no address is specified before or after a <comma> or <semicolon> separator, it shall be as if the address of the current line was specified before or after the separator. In the case of a <semicolon> separator, the current line ('.') shall be set to the first address, and only then will the next address be calculated. This feature can be used to determine the starting line for forwards and backwards searches (see rules 5. and 6.). A <percent-sign> ('%') shall be equivalent to entering the two addresses "1,$". Any delimiting <blank> characters between addresses, address separators, or address offsets shall be discarded. Command Line Parsing in ex The following symbol is used in this and following sections to describe parsing behavior: escape If a character is referred to as ``<backslash>-escaped'' or ``<control>V-escaped'', it shall mean that the character acquired or lost a special meaning by virtue of being preceded, respectively, by a <backslash> or <control>V character. Unless otherwise specified, the escaping character shall be discarded at that time and shall not be further considered for any purpose. Command-line parsing shall be done in the following steps. For each step, characters already evaluated shall be ignored; that is, the phrase ``leading character'' refers to the next character that has not yet been evaluated. 1. Leading <colon> characters shall be skipped. 2. Leading <blank> characters shall be skipped. 3. If the leading character is a double-quote character, the characters up to and including the next non-<backslash>-escaped <newline> shall be discarded, and any subsequent characters shall be parsed as a separate command. 4. Leading characters that can be interpreted as addresses shall be evaluated; see Addressing in ex. 5. Leading <blank> characters shall be skipped. 6. If the next character is a <vertical-line> character or a <newline>: a. If the next character is a <newline>: i. If ex is in open or visual mode, the current line shall be set to the last address specified, if any. ii. Otherwise, if the last command was terminated by a <vertical-line> character, no action shall be taken; for example, the command "||<newline>" shall execute two implied commands, not three. iii. Otherwise, step 6.b. shall apply. b. Otherwise, the implied command shall be the print command. The last #, p, and l flags specified to any ex command shall be remembered and shall apply to this implied command. Executing the ex number, print, or list command shall set the remembered flags to #, nothing, and l, respectively, plus any other flags specified for that execution of the number, print, or list command. If ex is not currently performing a global or v command, and no address or count is specified, the current line shall be incremented by 1 before the command is executed. If incrementing the current line would result in an address past the last line in the edit buffer, the command shall fail, and the increment shall not happen. c. The <newline> or <vertical-line> character shall be discarded and any subsequent characters shall be parsed as a separate command. 7. 
The command name shall be comprised of the next character (if the character is not alphabetic), or the next character and any subsequent alphabetic characters (if the character is alphabetic), with the following exceptions: a. Commands that consist of any prefix of the characters in the command name delete, followed immediately by any of the characters 'l', 'p', '+', '-', or '#' shall be interpreted as a delete command, followed by a <blank>, followed by the characters that were not part of the prefix of the delete command. The maximum number of characters shall be matched to the command name delete; for example, "del" shall not be treated as "de" followed by the flag l. b. Commands that consist of the character 'k', followed by a character that can be used as the name of a mark, shall be equivalent to the mark command followed by a <blank>, followed by the character that followed the 'k'. c. Commands that consist of the character 's', followed by characters that could be interpreted as valid options to the s command, shall be the equivalent of the s command, without any pattern or replacement values, followed by a <blank>, followed by the characters after the 's'. 8. The command name shall be matched against the possible command names, and a command name that contains a prefix matching the characters specified by the user shall be the executed command. In the case of commands where the characters specified by the user could be ambiguous, the executed command shall be as follows:

    a    append      n    next        t    t
    c    change      p    print       u    undo
    ch   change      pr   print       un   undo
    e    edit        r    read        v    v
    m    move        re   read        w    write
    ma   mark        s    s

Implementation extensions with names causing similar ambiguities shall not be checked for a match until all possible matches for commands specified by POSIX.1-2008 have been checked. 9. If the command is a ! command, or if the command is a read command followed by zero or more <blank> characters and a !, or if the command is a write command followed by one or more <blank> characters and a !, the rest of the command shall include all characters up to a non-<backslash>-escaped <newline>. The <newline> shall be discarded and any subsequent characters shall be parsed as a separate ex command. 10. Otherwise, if the command is an edit, ex, or next command, or a visual command while in open or visual mode, the next part of the command shall be parsed as follows: a. Any '!' character immediately following the command shall be skipped and be part of the command. b. Any leading <blank> characters shall be skipped and be part of the command. c. If the next character is a '+', characters up to the first non-<backslash>-escaped <newline> or non-<backslash>-escaped <blank> shall be skipped and be part of the command. d. The rest of the command shall be determined by the steps specified in paragraph 12. 11. Otherwise, if the command is a global, open, s, or v command, the next part of the command shall be parsed as follows: a. Any leading <blank> characters shall be skipped and be part of the command. b. If the next character is not an alphanumeric, double-quote, <newline>, <backslash>, or <vertical-line> character: i. The next character shall be used as a command delimiter. ii. If the command is a global, open, or v command, characters up to the first non-<backslash>-escaped <newline>, or first non-<backslash>-escaped delimiter character, shall be skipped and be part of the command. iii.
If the command is an s command, characters up to the first non-<backslash>-escaped <newline>, or second non-<backslash>-escaped delimiter character, shall be skipped and be part of the command. c. If the command is a global or v command, characters up to the first non-<backslash>-escaped <newline> shall be skipped and be part of the command. d. Otherwise, the rest of the command shall be determined by the steps specified in paragraph 12. 12. Otherwise: a. If the command was a map, unmap, abbreviate, or unabbreviate command, characters up to the first non-<control>V-escaped <newline>, <vertical-line>, or double-quote character shall be skipped and be part of the command. b. Otherwise, characters up to the first non-<backslash>-escaped <newline>, <vertical-line>, or double-quote character shall be skipped and be part of the command. c. If the command was an append, change, or insert command, and the step 12.b. ended at a <vertical-line> character, any subsequent characters, up to the next non-<backslash>-escaped <newline> shall be used as input text to the command. d. If the command was ended by a double-quote character, all subsequent characters, up to the next non-<backslash>-escaped <newline>, shall be discarded. e. The terminating <newline> or <vertical-line> character shall be discarded and any subsequent characters shall be parsed as a separate ex command. Command arguments shall be parsed as described by the Synopsis and Description of each individual ex command. This parsing shall not be <blank>-sensitive, except for the ! argument, which must follow the command name without intervening <blank> characters, and where it would otherwise be ambiguous. For example, count and flag arguments need not be <blank>-separated because "d22p" is not ambiguous, but file arguments to the ex next command must be separated by one or more <blank> characters. Any <blank> in command arguments for the abbreviate, unabbreviate, map, and unmap commands can be <control>V-escaped, in which case the <blank> shall not be used as an argument delimiter. Any <blank> in the command argument for any other command can be <backslash>-escaped, in which case that <blank> shall not be used as an argument delimiter. Within command arguments for the abbreviate, unabbreviate, map, and unmap commands, any character can be <control>V-escaped. All such escaped characters shall be treated literally and shall have no special meaning. Within command arguments for all other ex commands that are not regular expressions or replacement strings, any character that would otherwise have a special meaning can be <backslash>-escaped. Escaped characters shall be treated literally, without special meaning as shell expansion characters or '!', '%', and '#' expansion characters. See Regular Expressions in ex and Replacement Strings in ex for descriptions of command arguments that are regular expressions or replacement strings. Non-<backslash>-escaped '%' characters appearing in file arguments to any ex command shall be replaced by the current pathname; unescaped '#' characters shall be replaced by the alternate pathname. It shall be an error if '%' or '#' characters appear unescaped in an argument and their corresponding values are not set. Non-<backslash>-escaped '!' characters in the arguments to either the ex ! command or the open and visual mode ! command, or in the arguments to the ex read command, where the first non-<blank> after the command name is a '!' 
character, or in the arguments to the ex write command where the command name is followed by one or more <blank> characters and the first non-<blank> after the command name is a '!' character, shall be replaced with the arguments to the last of those three commands as they appeared after all unescaped '%', '#', and '!' characters were replaced. It shall be an error if '!' characters appear unescaped in one of these commands and there has been no previous execution of one of these commands. If an error occurs during the parsing or execution of an ex command: * An informational message to this effect shall be written. Execution of the ex command shall stop, and the cursor (for example, the current line and column) shall not be further modified. * If the ex command resulted from a map expansion, all characters from that map expansion shall be discarded, except as otherwise specified by the map command. * Otherwise, if the ex command resulted from the processing of an EXINIT environment variable, a .exrc file, a :source command, a -c option, or a +command specified to an ex edit, ex, next, or visual command, no further commands from the source of the commands shall be executed. * Otherwise, if the ex command resulted from the execution of a buffer or a global or v command, no further commands caused by the execution of the buffer or the global or v command shall be executed. * Otherwise, if the ex command was not terminated by a <newline>, all characters up to and including the next non-<backslash>-escaped <newline> shall be discarded. Input Editing in ex The following symbol is used in this and the following sections to specify command actions: word In the POSIX locale, a word consists of a maximal sequence of letters, digits, and underscores, delimited at both ends by characters other than letters, digits, or underscores, or by the beginning or end of a line or the edit buffer. When accepting input characters from the user, in either ex command mode or ex text input mode, ex shall enable canonical mode input processing, as defined in the System Interfaces volume of POSIX.12017. If in ex text input mode: 1. If the number edit option is set, ex shall prompt for input using the line number that would be assigned to the line if it is entered, in the format specified for the ex number command. 2. If the autoindent edit option is set, ex shall prompt for input using autoindent characters, as described by the autoindent edit option. autoindent characters shall follow the line number, if any. If in ex command mode: 1. If the prompt edit option is set, input shall be prompted for using a single ':' character; otherwise, there shall be no prompt. The input characters in the following sections shall have the following effects on the input line. Scroll Synopsis: eof See the description of the stty eof character in stty(1p). If in ex command mode: If the eof character is the first character entered on the line, the line shall be evaluated as if it contained two characters: a <control>D and a <newline>. Otherwise, the eof character shall have no special meaning. If in ex text input mode: If the cursor follows an autoindent character, the autoindent characters in the line shall be modified so that a part of the next text input character will be displayed on the first column in the line after the previous shiftwidth edit option column boundary, and the user shall be prompted again for input for the same line. 
Otherwise, if the cursor follows a '0', which follows an autoindent character, and the '0' was the previous text input character, the '0' and all autoindent characters in the line shall be discarded, and the user shall be prompted again for input for the same line. Otherwise, if the cursor follows a '^', which follows an autoindent character, and the '^' was the previous text input character, the '^' and all autoindent characters in the line shall be discarded, and the user shall be prompted again for input for the same line. In addition, the autoindent level for the next input line shall be derived from the same line from which the autoindent level for the current input line was derived. Otherwise, if there are no autoindent or text input characters in the line, the eof character shall be discarded. Otherwise, the eof character shall have no special meaning. <newline> Synopsis: <newline> <control>-J If in ex command mode: Cause the command line to be parsed; <control>J shall be mapped to the <newline> for this purpose. If in ex text input mode: Terminate the current line. If there are no characters other than autoindent characters on the line, all characters on the line shall be discarded. Prompt for text input on a new line after the current line. If the autoindent edit option is set, an appropriate number of autoindent characters shall be added as a prefix to the line as described by the ex autoindent edit option. <backslash> Synopsis: <backslash> Allow the entry of a subsequent <newline> or <control>J as a literal character, removing any special meaning that it may have to the editor during text input mode. The <backslash> character shall be retained and evaluated when the command line is parsed, or retained and included when the input text becomes part of the edit buffer. <control>V Synopsis: <control>-V Allow the entry of any subsequent character as a literal character, removing any special meaning that it may have to the editor during text input mode. The <control>V character shall be discarded before the command line is parsed or the input text becomes part of the edit buffer. If the ``literal next'' functionality is performed by the underlying system, it is implementation-defined whether a character other than <control>V performs this function. <control>W Synopsis: <control>-W Discard the <control>W, and the word previous to it in the input line, including any <blank> characters following the word and preceding the <control>W. If the ``word erase'' functionality is performed by the underlying system, it is implementation- defined whether a character other than <control>W performs this function. Command Descriptions in ex The following symbols are used in this section to represent command modifiers. Some of these modifiers can be omitted, in which case the specified defaults shall be used. 1addr A single line address, given in any of the forms described in Addressing in ex; the default shall be the current line ('.'), unless otherwise specified. If the line address is zero, it shall be an error, unless otherwise specified in the following command descriptions. If the edit buffer is empty, and the address is specified with a command other than =, append, insert, open, put, read, or visual, or the address is not zero, it shall be an error. 2addr Two addresses specifying an inclusive range of lines. If no addresses are specified, the default for 2addr shall be the current line only (".,."), unless otherwise specified in the following command descriptions. 
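The following informative example assumes a hypothetical edit buffer of at least ten lines with the current line set to line 5; the buffer contents and line numbers are assumed for illustration only and are not part of the normative description. It shows how the 1addr and 2addr defaults are applied:

    :p          Write the current line (2addr defaults to ".,.").
    :3p         Write only line 3; a single address names that line.
    :1,5p       Write lines 1 through 5 inclusive.
    :.,+2d      Delete the current line and the two lines following it.

The first form relies entirely on the defaults described here; the remaining forms supply one or two explicit addresses.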
If one address is specified, 2addr shall specify that line only, unless otherwise specified in the following command descriptions. It shall be an error if the first address is greater than the second address. If the edit buffer is empty, and the two addresses are specified with a command other than the !, write, wq, or xit commands, or either address is not zero, it shall be an error. count A positive decimal number. If count is specified, it shall be equivalent to specifying an additional address to the command, unless otherwise specified by the following command descriptions. The additional address shall be equal to the last address specified to the command (either explicitly or by default) plus count-1. If this would result in an address greater than the last line of the edit buffer, it shall be corrected to equal the last line of the edit buffer. flags One or more of the characters '+', '-', '#', 'p', or 'l' (ell). The flag characters can be <blank>-separated, and in any order or combination. The characters '#', 'p', and 'l' shall cause lines to be written in the format specified by the print command with the specified flags. The lines to be written are as follows: 1. All edit buffer lines written during the execution of the ex &, ~, list, number, open, print, s, visual, and z commands shall be written as specified by flags. 2. After the completion of an ex command with a flag as an argument, the current line shall be written as specified by flags, unless the current line was the last line written by the command. The characters '+' and '-' cause the value of the current line after the execution of the ex command to be adjusted by the offset address as described in Addressing in ex. This adjustment shall occur before the current line is written as described in 2. above. The default for flags shall be none. buffer One of a number of named areas for holding text. The named buffers are specified by the alphanumeric characters of the POSIX locale. There shall also be one ``unnamed'' buffer. When no buffer is specified for editor commands that use a buffer, the unnamed buffer shall be used. Commands that store text into buffers shall store the text as it was before the command took effect, and shall store text occurring earlier in the file before text occurring later in the file, regardless of how the text region was specified. Commands that store text into buffers shall store the text into the unnamed buffer as well as any specified buffer. In ex commands, buffer names are specified as the name by itself. In open or visual mode commands the name is preceded by a double-quote ('"') character. If the specified buffer name is an uppercase character, and the buffer contents are to be modified, the buffer shall be appended to rather than being overwritten. If the buffer is not being modified, specifying the buffer name in lowercase and uppercase shall have identical results. There shall also be buffers named by the numbers 1 through 9. In open and visual mode, if a region of text including characters from more than a single line is being modified by the vi c or d commands, the motion character associated with the c or d commands specifies that the buffer text shall be in line mode, or the commands %, `, /, ?, (, ), N, n, {, or } are used to define a region of text for the c or d commands, the contents of buffers 1 through 8 shall be moved into the buffer named by the next numerically greater value, the contents of buffer 9 shall be discarded, and the region of text shall be copied into buffer 1. 
This shall be in addition to copying the text into a user-specified buffer or unnamed buffer, or both. Numeric buffers can be specified as a source buffer for open and visual mode commands; however, specifying a numeric buffer as the write target of an open or visual mode command shall have unspecified results. The text of each buffer shall have the characteristic of being in either line or character mode. Appending text to a non-empty buffer shall set the mode to match the characteristic of the text being appended. Appending text to a buffer shall cause the creation of at least one additional line in the buffer. All text stored into buffers by ex commands shall be in line mode. The ex commands that use buffers as the source of text specify individually how buffers of different modes are handled. Each open or visual mode command that uses buffers for any purpose specifies individually the mode of the text stored into the buffer and how buffers of different modes are handled. file Command text used to derive a pathname. The default shall be the current pathname, as defined previously, in which case, if no current pathname has yet been established it shall be an error, except where specifically noted in the individual command descriptions that follow. If the command text contains any of the characters '~', '{', '[', '*', '?', '$', '"', backquote, single-quote, and <backslash>, it shall be subjected to the process of ``shell expansions'', as described below; if more than a single pathname results and the command expects only one, it shall be an error. The process of shell expansions in the editor shall be done as follows. The ex utility shall pass two arguments to the program named by the shell edit option; the first shall be -c, and the second shall be the string "echo" and the command text as a single argument. The standard output and standard error of that command shall replace the command text. ! A character that can be appended to the command name to modify its operation, as detailed in the individual command descriptions. With the exception of the ex read, write, and ! commands, the '!' character shall only act as a modifier if there are no <blank> characters between it and the command name. remembered search direction The vi commands N and n begin searching in a forwards or backwards direction in the edit buffer based on a remembered search direction, which is initially unset, and is set by the ex global, v, s, and tag commands, and the vi / and ? commands. Abbreviate Synopsis: ab[breviate][lhs rhs] If lhs and rhs are not specified, write the current list of abbreviations and do nothing more. Implementations may restrict the set of characters accepted in lhs or rhs, except that printable characters and <blank> characters shall not be restricted. Additional restrictions shall be implementation-defined. In both lhs and rhs, any character may be escaped with a <control>V, in which case the character shall not be used to delimit lhs from rhs, and the escaping <control>V shall be discarded. In open and visual text input mode, if a non-word or <ESC> character that is not escaped by a <control>V character is entered after a word character, a check shall be made for a set of characters matching lhs, in the text input entered during this command. If it is found, the effect shall be as if rhs was entered instead of lhs. The set of characters that are checked is defined as follows: 1. 
If there are no characters inserted before the word and non- word or <ESC> characters that triggered the check, the set of characters shall consist of the word character. 2. If the character inserted before the word and non-word or <ESC> characters that triggered the check is a word character, the set of characters shall consist of the characters inserted immediately before the triggering characters that are word characters, plus the triggering word character. 3. If the character inserted before the word and non-word or <ESC> characters that triggered the check is not a word character, the set of characters shall consist of the characters that were inserted before the triggering characters that are neither <blank> characters nor word characters, plus the triggering word character. It is unspecified whether the lhs argument entered for the ex abbreviate and unabbreviate commands is replaced in this fashion. Regardless of whether or not the replacement occurs, the effect of the command shall be as if the replacement had not occurred. Current line: Unchanged. Current column: Unchanged. Append Synopsis: [1addr] a[ppend][!] Enter ex text input mode; the input text shall be placed after the specified line. If line zero is specified, the text shall be placed at the beginning of the edit buffer. This command shall be affected by the number and autoindent edit options; following the command name with '!' shall cause the autoindent edit option setting to be toggled for the duration of this command only. Current line: Set to the last input line; if no lines were input, set to the specified line, or to the first line of the edit buffer if a line of zero was specified, or zero if the edit buffer is empty. Current column: Set to non-<blank>. Arguments Synopsis: ar[gs] Write the current argument list, with the current argument-list entry, if any, between '[' and ']' characters. Current line: Unchanged. Current column: Unchanged. Change Synopsis: [2addr] c[hange][!][count] Enter ex text input mode; the input text shall replace the specified lines. The specified lines shall be copied into the unnamed buffer, which shall become a line mode buffer. This command shall be affected by the number and autoindent edit options; following the command name with '!' shall cause the autoindent edit option setting to be toggled for the duration of this command only. Current line: Set to the last input line; if no lines were input, set to the line before the first address, or to the first line of the edit buffer if there are no lines preceding the first address, or to zero if the edit buffer is empty. Current column: Set to non-<blank>. Change Directory Synopsis: chd[ir][!][directory] cd[!][directory] Change the current working directory to directory. If no directory argument is specified, and the HOME environment variable is set to a non-null and non-empty value, directory shall default to the value named in the HOME environment variable. If the HOME environment variable is empty or is undefined, the default value of directory is implementation- defined. If no '!' is appended to the command name, and the edit buffer has been modified since the last complete write, and the current pathname does not begin with a '/', it shall be an error. Current line: Unchanged. Current column: Unchanged. Copy Synopsis: [2addr] co[py] 1addr [flags] [2addr] t 1addr [flags] Copy the specified lines after the specified destination line; line zero specifies that the lines shall be placed at the beginning of the edit buffer. 
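As an informative example of the copy command (the addresses used are assumed for illustration and are not part of the normative description):

    :1,5t$      Copy lines 1 through 5 to the end of the edit buffer,
                leaving the original lines in place.
    :3co0       Place a copy of line 3 at the beginning of the edit
                buffer.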
Current line: Set to the last line copied. Current column: Set to non-<blank>. Delete Synopsis: [2addr] d[elete][buffer][count][flags] Delete the specified lines into a buffer (defaulting to the unnamed buffer), which shall become a line-mode buffer. Flags can immediately follow the command name; see Command Line Parsing in ex. Current line: Set to the line following the deleted lines, or to the last line in the edit buffer if that line is past the end of the edit buffer, or to zero if the edit buffer is empty. Current column: Set to non-<blank>. Edit Synopsis: e[dit][!][+command][file] ex[!][+command][file] If no '!' is appended to the command name, and the edit buffer has been modified since the last complete write, it shall be an error. If file is specified, replace the current contents of the edit buffer with the current contents of file, and set the current pathname to file. If file is not specified, replace the current contents of the edit buffer with the current contents of the file named by the current pathname. If for any reason the current contents of the file cannot be accessed, the edit buffer shall be empty. The +command option shall be <blank>-delimited; <blank> characters within the +command can be escaped by preceding them with a <backslash> character. The +command shall be interpreted as an ex command immediately after the contents of the edit buffer have been replaced and the current line and column have been set. If the edit buffer is empty: Current line: Set to 0. Current column: Set to 1. Otherwise, if executed while in ex command mode or if the +command argument is specified: Current line: Set to the last line of the edit buffer. Current column: Set to non-<blank>. Otherwise, if file is omitted or results in the current pathname: Current line: Set to the first line of the edit buffer. Current column: Set to non-<blank>. Otherwise, if file is the same as the last file edited, the line and column shall be set as follows; if the file was previously edited, the line and column may be set as follows: Current line: Set to the last value held when that file was last edited. If this value is not a valid line in the new edit buffer, set to the first line of the edit buffer. Current column: If the current line was set to the last value held when the file was last edited, set to the last value held when the file was last edited. Otherwise, or if the last value is not a valid column in the new edit buffer, set to non-<blank>. Otherwise: Current line: Set to the first line of the edit buffer. Current column: Set to non-<blank>. File Synopsis: f[ile][file] If a file argument is specified, the alternate pathname shall be set to the current pathname, and the current pathname shall be set to file. Write an informational message. If the file has a current pathname, it shall be included in this message; otherwise, the message shall indicate that there is no current pathname. If the edit buffer contains lines, the current line number and the number of lines in the edit buffer shall be included in this message; otherwise, the message shall indicate that the edit buffer is empty. If the edit buffer has been modified since the last complete write, this fact shall be included in this message. If the readonly edit option is set, this fact shall be included in this message. The message may contain other unspecified information. Current line: Unchanged. Current column: Unchanged. Global Synopsis: [2addr] g[lobal] /pattern/ [commands] [2addr] v /pattern/ [commands] The optional '!' 
character after the global command shall be the same as executing the v command. If pattern is empty (for example, "//") or not specified, the last regular expression used in the editor command shall be used as the pattern. The pattern can be delimited by <slash> characters (shown in the Synopsis), as well as any non- alphanumeric or non-<blank> other than <backslash>, <vertical- line>, <newline>, or double-quote. If no lines are specified, the lines shall default to the entire file. The global and v commands are logically two-pass operations. First, mark the lines within the specified lines for which the line excluding the terminating <newline> matches (global) or does not match (v or global!) the specified pattern. Second, execute the ex commands given by commands, with the current line ('.') set to each marked line. If an error occurs during this process, or the contents of the edit buffer are replaced (for example, by the ex :edit command) an error message shall be written and no more commands resulting from the execution of this command shall be processed. Multiple ex commands can be specified by entering multiple commands on a single line using a <vertical-line> to delimit them, or one per line, by escaping each <newline> with a <backslash>. If no commands are specified: 1. If in ex command mode, it shall be as if the print command were specified. 2. Otherwise, no command shall be executed. For the append, change, and insert commands, the input text shall be included as part of the command, and the terminating <period> can be omitted if the command ends the list of commands. The open and visual commands can be specified as one of the commands, in which case each marked line shall cause the editor to enter open or visual mode. If open or visual mode is exited using the vi Q command, the current line shall be set to the next marked line, and open or visual mode reentered, until the list of marked lines is exhausted. The global, v, and undo commands cannot be used in commands. Marked lines may be deleted by commands executed for lines occurring earlier in the file than the marked lines. In this case, no commands shall be executed for the deleted lines. If the remembered search direction is not set, the global and v commands shall set it to forward. The autoprint and autoindent edit options shall be inhibited for the duration of the g or v command. Current line: If no commands executed, set to the last marked line. Otherwise, as specified for the executed ex commands. Current column: If no commands are executed, set to non-<blank>; otherwise, as specified for the individual ex commands. Insert Synopsis: [1addr] i[nsert][!] Enter ex text input mode; the input text shall be placed before the specified line. If the line is zero or 1, the text shall be placed at the beginning of the edit buffer. This command shall be affected by the number and autoindent edit options; following the command name with '!' shall cause the autoindent edit option setting to be toggled for the duration of this command only. Current line: Set to the last input line; if no lines were input, set to the line before the specified line, or to the first line of the edit buffer if there are no lines preceding the specified line, or zero if the edit buffer is empty. Current column: Set to non-<blank>. Join Synopsis: [2addr] j[oin][!][count][flags] If count is specified: If no address was specified, the join command shall behave as if 2addr were the current line and the current line plus count (.,. + count). 
If one address was specified, the join command shall behave as if 2addr were the specified address and the specified address plus count (addr,addr + count). If two addresses were specified, the join command shall behave as if an additional address, equal to the last address plus count -1 (addr1,addr2,addr2 + count -1), was specified. If this would result in a second address greater than the last line of the edit buffer, it shall be corrected to be equal to the last line of the edit buffer. If no count is specified: If no address was specified, the join command shall behave as if 2addr were the current line and the next line (.,. +1). If one address was specified, the join command shall behave as if 2addr were the specified address and the next line (addr,addr +1). Join the text from the specified lines together into a single line, which shall replace the specified lines. If a '!' character is appended to the command name, the join shall be without modification of any line, independent of the current locale. Otherwise, in the POSIX locale, set the current line to the first of the specified lines, and then, for each subsequent line, proceed as follows: 1. Discard leading <space> characters from the line to be joined. 2. If the line to be joined is now empty, delete it, and skip steps 3 through 5. 3. If the current line ends in a <blank>, or the first character of the line to be joined is a ')' character, join the lines without further modification. 4. If the last character of the current line is a '.', join the lines with two <space> characters between them. 5. Otherwise, join the lines with a single <space> between them. Current line: Set to the first line specified. Current column: Set to non-<blank>. List Synopsis: [2addr] l[ist][count][flags] This command shall be equivalent to the ex command: [2addr] p[rint][count] l[flags] See Print. Map Synopsis: map[!][lhs rhs] If lhs and rhs are not specified: 1. If '!' is specified, write the current list of text input mode maps. 2. Otherwise, write the current list of command mode maps. 3. Do nothing more. Implementations may restrict the set of characters accepted in lhs or rhs, except that printable characters and <blank> characters shall not be restricted. Additional restrictions shall be implementation-defined. In both lhs and rhs, any character can be escaped with a <control>V, in which case the character shall not be used to delimit lhs from rhs, and the escaping <control>V shall be discarded. If the character '!' is appended to the map command name, the mapping shall be effective during open or visual text input mode rather than open or visual command mode. This allows lhs to have two different map definitions at the same time: one for command mode and one for text input mode. For command mode mappings: When the lhs is entered as any part of a vi command in open or visual mode (but not as part of the arguments to the command), the action shall be as if the corresponding rhs had been entered. If any character in the command, other than the first, is escaped using a <control>V character, that character shall not be part of a match to an lhs. It is unspecified whether implementations shall support map commands where the lhs is more than a single character in length, where the first character of the lhs is printable. If lhs contains more than one character and the first character is '#', followed by a sequence of digits corresponding to a numbered function key, then when this function key is typed it shall be mapped to rhs. 
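As an informative example (the particular key assignments are assumed for illustration), the command mode mappings:

    :map #1 1G
    :map #2 G

arrange, on terminals that provide such keys, for function key 1 to move to the first line of the edit buffer and for function key 2 to move to the last line.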
Characters other than digits following a '#' character also represent the function key named by the characters in the lhs following the '#' and may be mapped to rhs. It is unspecified how function keys are named or what function keys are supported. For text input mode mappings: When the lhs is entered as any part of text entered in open or visual text input modes, the action shall be as if the corresponding rhs had been entered. If any character in the input text is escaped using a <control>V character, that character shall not be part of a match to an lhs. It is unspecified whether the lhs text entered for subsequent map or unmap commands is replaced with the rhs text for the purposes of the screen display; regardless of whether or not the display appears as if the corresponding rhs text was entered, the effect of the command shall be as if the lhs text was entered. If only part of the lhs is entered, it is unspecified how long the editor will wait for additional, possibly matching characters before treating the already entered characters as not matching the lhs. The rhs characters shall themselves be subject to remapping, unless otherwise specified by the remap edit option, except that if the characters in lhs occur as prefix characters in rhs, those characters shall not be remapped. On block-mode terminals, the mapping need not occur immediately (for example, it may occur after the terminal transmits a group of characters to the system), but it shall achieve the same results as if it occurred immediately. Current line: Unchanged. Current column: Unchanged. Mark Synopsis: [1addr] ma[rk] character [1addr] k character Implementations shall support character values of a single lowercase letter of the POSIX locale and the backquote and single-quote characters; support of other characters is implementation-defined. If executing the vi m command, set the specified mark to the current line and 1-based numbered character referenced by the current column, if any; otherwise, column position 1. Otherwise, set the specified mark to the specified line and 1-based numbered first non-<blank> non-<newline> in the line, if any; otherwise, the last non-<newline> in the line, if any; otherwise, column position 1. The mark shall remain associated with the line until the mark is reset or the line is deleted. If a deleted line is restored by a subsequent undo command, any marks previously associated with the line, which have not been reset, shall be restored as well. Any use of a mark not associated with a current line in the edit buffer shall be an error. The marks ` and ' shall be set as described previously, immediately before the following events occur in the editor: 1. The use of '$' as an ex address 2. The use of a positive decimal number as an ex address 3. The use of a search command as an ex address 4. The use of a mark reference as an ex address 5. The use of the following open and visual mode commands: <control>], %, (, ), [, ], {, } 6. The use of the following open and visual mode commands: ', G, H, L, M, z if the current line will change as a result of the command 7. The use of the open and visual mode commands: /, ?, N, `, n if the current line or column will change as a result of the command 8. The use of the ex mode commands: z, undo, global, v For rules 1., 2., 3., and 4., the ` and ' marks shall not be set if the ex command is parsed as specified by rule 6.a. in Command Line Parsing in ex. 
For rules 5., 6., and 7., the ` and ' marks shall not be set if the commands are used as motion commands in open and visual mode. For rules 1., 2., 3., 4., 5., 6., 7., and 8., the ` and ' marks shall not be set if the command fails. The ` and ' marks shall be set as described previously, each time the contents of the edit buffer are replaced (including the editing of the initial buffer), if in open or visual mode, or if in ex mode and the edit buffer is not empty, before any commands or movements (including commands or movements specified by the -c or -t options or the +command argument) are executed on the edit buffer. If in open or visual mode, the marks shall be set as if executing the vi m command; otherwise, as if executing the ex mark command. When changing from ex mode to open or visual mode, if the ` and ' marks are not already set, the ` and ' marks shall be set as described previously. Current line: Unchanged. Current column: Unchanged. Move Synopsis: [2addr] m[ove] 1addr [flags] Move the specified lines after the specified destination line. A destination of line zero specifies that the lines shall be placed at the beginning of the edit buffer. It shall be an error if the destination line is within the range of lines to be moved. Current line: Set to the last of the moved lines. Current column: Set to non-<blank>. Next Synopsis: n[ext][!][+command][file ...] If no '!' is appended to the command name, and the edit buffer has been modified since the last complete write, it shall be an error, unless the file is successfully written as specified by the autowrite option. If one or more files is specified: 1. Set the argument list to the specified filenames. 2. Set the current argument list reference to be the first entry in the argument list. 3. Set the current pathname to the first filename specified. Otherwise: 1. It shall be an error if there are no more filenames in the argument list after the filename currently referenced. 2. Set the current pathname and the current argument list reference to the filename after the filename currently referenced in the argument list. Replace the contents of the edit buffer with the contents of the file named by the current pathname. If for any reason the contents of the file cannot be accessed, the edit buffer shall be empty. This command shall be affected by the autowrite and writeany edit options. The +command option shall be <blank>-delimited; <blank> characters can be escaped by preceding them with a <backslash> character. The +command shall be interpreted as an ex command immediately after the contents of the edit buffer have been replaced and the current line and column have been set. Current line: Set as described for the edit command. Current column: Set as described for the edit command. Number Synopsis: [2addr] nu[mber][count][flags] [2addr] #[count][flags] These commands shall be equivalent to the ex command: [2addr] p[rint][count] #[flags] See Print. Open Synopsis: [1addr] o[pen] /pattern/ [flags] This command need not be supported on block-mode terminals or terminals with insufficient capabilities. If standard input, standard output, or standard error are not terminal devices, the results are unspecified. Enter open mode. The trailing delimiter can be omitted from pattern at the end of the command line. If pattern is empty (for example, "//") or not specified, the last regular expression used in the editor shall be used as the pattern. 
The pattern can be delimited by <slash> characters (shown in the Synopsis), as well as any non-alphanumeric or non-<blank> other than <backslash>, <vertical-line>, <newline>, or double-quote. Current line: Set to the specified line. Current column: Set to non-<blank>. Preserve Synopsis: pre[serve] Save the edit buffer in a form that can later be recovered by using the -r option or by using the ex recover command. After the file has been preserved, a mail message shall be sent to the user. This message shall be readable by invoking the mailx utility. The message shall contain the name of the file, the time of preservation, and an ex command that could be used to recover the file. Additional information may be included in the mail message. Current line: Unchanged. Current column: Unchanged. Print Synopsis: [2addr] p[rint][count][flags] Write the addressed lines. The behavior is unspecified if the number of columns on the display is less than the number of columns required to write any single character in the lines being written. Non-printable characters, except for the <tab>, shall be written as implementation-defined multi-character sequences. If the # flag is specified or the number edit option is set, each line shall be preceded by its line number in the following format: "%6d ", <line number> If the l flag is specified or the list edit option is set: 1. The characters listed in the Base Definitions volume of POSIX.1-2017, Table 5-1, Escape Sequences and Associated Actions shall be written as the corresponding escape sequence. 2. Non-printable characters not in the Base Definitions volume of POSIX.1-2017, Table 5-1, Escape Sequences and Associated Actions shall be written as one three-digit octal number (with a preceding <backslash>) for each byte in the character (most significant byte first). 3. The end of each line shall be marked with a '$', and literal '$' characters within the line shall be written with a preceding <backslash>. Long lines shall be folded; the length at which folding occurs is unspecified, but should be appropriate for the output terminal, considering the number of columns of the terminal. If a line is folded, and the l flag is not specified and the list edit option is not set, it is unspecified whether a multi-column character at the folding position is separated; it shall not be discarded. Current line: Set to the last written line. Current column: Unchanged if the current line is unchanged; otherwise, set to non-<blank>. Put Synopsis: [1addr] pu[t][buffer] Append text from the specified buffer (by default, the unnamed buffer) to the specified line; line zero specifies that the text shall be placed at the beginning of the edit buffer. Each portion of a line in the buffer shall become a new line in the edit buffer, regardless of the mode of the buffer. Current line: Set to the last line entered into the edit buffer. Current column: Set to non-<blank>. Quit Synopsis: q[uit][!] If no '!' is appended to the command name: 1. If the edit buffer has been modified since the last complete write, it shall be an error. 2. If there are filenames in the argument list after the filename currently referenced, and the last command was not a quit, wq, xit, or ZZ (see Exit) command, it shall be an error. Otherwise, terminate the editing session. Read Synopsis: [1addr] r[ead][!][file] If '!'
is not the first non-<blank> to follow the command name, a copy of the specified file shall be appended into the edit buffer after the specified line; line zero specifies that the copy shall be placed at the beginning of the edit buffer. The number of lines and bytes read shall be written. If no file is named, the current pathname shall be the default. If there is no current pathname, then file shall become the current pathname. If there is no current pathname or file operand, it shall be an error. Specifying a file that is not of type regular shall have unspecified results. Otherwise, if file is preceded by '!', the rest of the line after the '!' shall have '%', '#', and '!' characters expanded as described in Command Line Parsing in ex. The ex utility shall then pass two arguments to the program named by the shell edit option; the first shall be -c and the second shall be the expanded arguments to the read command as a single argument. The standard input of the program shall be set to the standard input of the ex program when it was invoked. The standard error and standard output of the program shall be appended into the edit buffer after the specified line. Each line in the copied file or program output (as delimited by <newline> characters or the end of the file or output if it is not immediately preceded by a <newline>), shall be a separate line in the edit buffer. Any occurrences of <carriage-return> and <newline> pairs in the output shall be treated as single <newline> characters. The special meaning of the '!' following the read command can be overridden by escaping it with a <backslash> character. Current line: If no lines are added to the edit buffer, unchanged. Otherwise, if in open or visual mode, set to the first line entered into the edit buffer. Otherwise, set to the last line entered into the edit buffer. Current column: Set to non-<blank>. Recover Synopsis: rec[over][!] file If no '!' is appended to the command name, and the edit buffer has been modified since the last complete write, it shall be an error. If no file operand is specified, then the current pathname shall be used. If there is no current pathname or file operand, it shall be an error. If no recovery information has previously been saved about file, the recover command shall behave identically to the edit command, and an informational message to this effect shall be written. Otherwise, set the current pathname to file, and replace the current contents of the edit buffer with the recovered contents of file. If there are multiple instances of the file to be recovered, the one most recently saved shall be recovered, and an informational message that there are previous versions of the file that can be recovered shall be written. The editor shall behave as if the contents of the edit buffer have already been modified. Current file: Set as described for the edit command. Current column: Set as described for the edit command. Rewind Synopsis: rew[ind][!] If no '!' is appended to the command name, and the edit buffer has been modified since the last complete write, it shall be an error, unless the file is successfully written as specified by the autowrite option. If the argument list is empty, it shall be an error. The current argument list reference and the current pathname shall be set to the first filename in the argument list. Replace the contents of the edit buffer with the contents of the file named by the current pathname. 
If for any reason the contents of the file cannot be accessed, the edit buffer shall be empty. This command shall be affected by the autowrite and writeany edit options. Current line: Set as described for the edit command. Current column: Set as described for the edit command. Set Synopsis: se[t][option[=[value]] ...][nooption ...][option? ...][all] When no arguments are specified, write the value of the term edit option and those options whose values have been changed from the default settings; when the argument all is specified, write all of the option values. Giving an option name followed by the character '?' shall cause the current value of that option to be written. The '?' can be separated from the option name by zero or more <blank> characters. The '?' shall be necessary only for Boolean valued options. Boolean options can be given values by the form set option to turn them on or set nooption to turn them off; string and numeric options can be assigned by the form set option=value. Any <blank> characters in strings can be included as is by preceding each <blank> with an escaping <backslash>. More than one option can be set or listed by a single set command by specifying multiple arguments, each separated from the next by one or more <blank> characters. See Edit Options in ex for details about specific options. Current line: Unchanged. Current column: Unchanged. Shell Synopsis: sh[ell] Invoke the program named in the shell edit option with the single argument -i (interactive mode). Editing shall be resumed when the program exits. Current line: Unchanged. Current column: Unchanged. Source Synopsis: so[urce] file Read and execute ex commands from file. Lines in the file that are blank lines shall be ignored. Current line: As specified for the individual ex commands. Current column: As specified for the individual ex commands. Substitute Synopsis: [2addr] s[ubstitute][/pattern/repl/[options][count][flags]] [2addr] &[options][count][flags]] [2addr] ~[options][count][flags]] Replace the first instance of the pattern pattern by the string repl on each specified line. (See Regular Expressions in ex and Replacement Strings in ex.) Any non-alphabetic, non-<blank> delimiter other than <backslash>, '|', <newline>, or double-quote can be used instead of '/'. <backslash> characters can be used to escape delimiters, <backslash> characters, and other special characters. The trailing delimiter can be omitted from pattern or from repl at the end of the command line. If both pattern and repl are not specified or are empty (for example, "//"), the last s command shall be repeated. If only pattern is not specified or is empty, the last regular expression used in the editor shall be used as the pattern. If only repl is not specified or is empty, the pattern shall be replaced by nothing. If the entire replacement pattern is '%', the last replacement pattern to an s command shall be used. Entering a <carriage-return> in repl (which requires an escaping <backslash> in ex mode and an escaping <control>V in open or vi mode) shall split the line at that point, creating a new line in the edit buffer. The <carriage-return> shall be discarded. If options includes the letter 'g' (global), all non-overlapping instances of the pattern in the line shall be replaced. If options includes the letter 'c' (confirm), then before each substitution the line shall be written; the written line shall reflect all previous substitutions. 
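As an informative example (the pattern, replacement, and addresses are assumed for illustration and are not part of the normative description):

    :s/old/new/         Replace the first occurrence of "old" on the
                        current line.
    :1,$s/old/new/g     Replace every occurrence of "old" on every line.
    :%s/old/new/gc      As above, but request confirmation for each
                        substitution, using the display described below.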
On the following line, <space> characters shall be written beneath the characters from the line that are before the pattern to be replaced, and '^' characters written beneath the characters included in the pattern to be replaced. The ex utility shall then wait for a response from the user. An affirmative response shall cause the substitution to be done, while any other input shall not make the substitution. An affirmative response shall consist of a line with the affirmative response (as defined by the current locale) at the beginning of the line. This line shall be subject to editing in the same way as the ex command line. If interrupted (see the ASYNCHRONOUS EVENTS section), any modifications confirmed by the user shall be preserved in the edit buffer after the interrupt. If the remembered search direction is not set, the s command shall set it to forward. In the second Synopsis, the & command shall repeat the previous substitution, as if the & command were replaced by: s/pattern/repl/ where pattern and repl are as specified in the previous s, &, or ~ command. In the third Synopsis, the ~ command shall repeat the previous substitution, as if the '~' were replaced by: s/pattern/repl/ where pattern shall be the last regular expression specified to the editor, and repl shall be from the previous substitution (including & and ~) command. These commands shall be affected by the LC_MESSAGES environment variable. Current line: Set to the last line in which a substitution occurred, or, unchanged if no substitution occurred. Current column: Set to non-<blank>. Suspend Synopsis: su[spend][!] st[op][!] Allow control to return to the invoking process; ex shall suspend itself as if it had received the SIGTSTP signal. The suspension shall occur only if job control is enabled in the invoking shell (see the description of set -m). These commands shall be affected by the autowrite and writeany edit options. The current susp character (see stty(1p)) shall be equivalent to the suspend command. Tag Synopsis: ta[g][!] tagstring The results are unspecified if the format of a tags file is not as specified by the ctags utility (see ctags(1p)) description. The tag command shall search for tagstring in the tag files referred to by the tag edit option, in the order they are specified, until a reference to tagstring is found. Files shall be searched from beginning to end. If no reference is found, it shall be an error and an error message to this effect shall be written. If the reference is not found, or if an error occurs while processing a file referred to in the tag edit option, it shall be an error, and an error message shall be written at the first occurrence of such an error. Otherwise, if the tags file contained a pattern, the pattern shall be treated as a regular expression used in the editor; for example, for the purposes of the s command. If the tagstring is in a file with a different name than the current pathname, set the current pathname to the name of that file, and replace the contents of the edit buffer with the contents of that file. In this case, if no '!' is appended to the command name, and the edit buffer has been modified since the last complete write, it shall be an error, unless the file is successfully written as specified by the autowrite option. This command shall be affected by the autowrite, tag, taglength, and writeany edit options. Current line: If the tags file contained a line number, set to that line number. 
If the line number is larger than the last line in the edit buffer, an error message shall be written and the current line shall be set as specified for the edit command. If the tags file contained a pattern, set to the first occurrence of the pattern in the file. If no matching pattern is found, an error message shall be written and the current line shall be set as specified for the edit command. Current column: If the tags file contained a line-number reference and that line-number was not larger than the last line in the edit buffer, or if the tags file contained a pattern and that pattern was found, set to non-<blank>. Otherwise, set as specified for the edit command. Unabbreviate Synopsis: una[bbrev] lhs If lhs is not an entry in the current list of abbreviations (see Abbreviate), it shall be an error. Otherwise, delete lhs from the list of abbreviations. Current line: Unchanged. Current column: Unchanged. Undo Synopsis: u[ndo] Reverse the changes made by the last command that modified the contents of the edit buffer, including undo. For this purpose, the global, v, open, and visual commands, and commands resulting from buffer executions and mapped character expansions, are considered single commands. If no action that can be undone preceded the undo command, it shall be an error. If the undo command restores lines that were marked, the mark shall also be restored unless it was reset subsequent to the deletion of the lines. Current line: 1. If lines are added or changed in the file, set to the first line added or changed. 2. Set to the line before the first line deleted, if it exists. 3. Set to 1 if the edit buffer is not empty. 4. Set to zero. Current column: Set to non-<blank>. Unmap Synopsis: unm[ap][!] lhs If '!' is appended to the command name, and if lhs is not an entry in the list of text input mode map definitions, it shall be an error. Otherwise, delete lhs from the list of text input mode map definitions. If no '!' is appended to the command name, and if lhs is not an entry in the list of command mode map definitions, it shall be an error. Otherwise, delete lhs from the list of command mode map definitions. Current line: Unchanged. Current column: Unchanged. Version Synopsis: ve[rsion] Write a message containing version information for the editor. The format of the message is unspecified. Current line: Unchanged. Current column: Unchanged. Visual Synopsis: [1addr] vi[sual][type][count][flags] If ex is currently in open or visual mode, the Synopsis and behavior of the visual command shall be the same as the edit command, as specified by Edit. Otherwise, this command need not be supported on block-mode terminals or terminals with insufficient capabilities. If standard input, standard output, or standard error are not terminal devices, the results are unspecified. If count is specified, the value of the window edit option shall be set to count (as described in window). If the '^' type character was also specified, the window edit option shall be set before being used by the type character. Enter visual mode. If type is not specified, it shall be as if a type of '+' was specified. The type shall cause the following effects: + Place the beginning of the specified line at the top of the display. - Place the end of the specified line at the bottom of the display. . Place the beginning of the specified line in the middle of the display. 
^ If the specified line is less than or equal to the value of the window edit option, set the line to 1; otherwise, decrement the line by the value of the window edit option minus 1. Place the beginning of this line as close to the bottom of the displayed lines as possible, while still displaying the value of the window edit option number of lines. Current line: Set to the specified line. Current column: Set to non-<blank>. Write Synopsis: [2addr] w[rite][!][>>][file] [2addr] w[rite][!][file] [2addr] wq[!][>>][file] If no lines are specified, the lines shall default to the entire file. The command wq shall be equivalent to a write command followed by a quit command; wq! shall be equivalent to write! followed by quit. In both cases, if the write command fails, the quit shall not be attempted. If the command name is not followed by one or more <blank> characters, or file is not preceded by a '!' character, the write shall be to a file. 1. If the >> argument is specified, and the file already exists, the lines shall be appended to the file instead of replacing its contents. If the >> argument is specified, and the file does not already exist, it is unspecified whether the write shall proceed as if the >> argument had not been specified or if the write shall fail. 2. If the readonly edit option is set (see readonly), the write shall fail. 3. If file is specified, and is not the current pathname, and the file exists, the write shall fail. 4. If file is not specified, the current pathname shall be used. If there is no current pathname, the write command shall fail. 5. If the current pathname is used, and the current pathname has been changed by the file or read commands, and the file exists, the write shall fail. If the write is successful, subsequent writes shall not fail for this reason (unless the current pathname is changed again). 6. If the whole edit buffer is not being written, and the file to be written exists, the write shall fail. For rules 1., 2., 3., and 5., the write can be forced by appending the character '!' to the command name. For rules 2., 3., and 5., the write can be forced by setting the writeany edit option. Additional, implementation-defined tests may cause the write to fail. If the edit buffer is empty, a file without any contents shall be written. An informational message shall be written noting the number of lines and bytes written. Otherwise, if the command is followed by one or more <blank> characters, and the file is preceded by '!', the rest of the line after the '!' shall have '%', '#', and '!' characters expanded as described in Command Line Parsing in ex. The ex utility shall then pass two arguments to the program named by the shell edit option; the first shall be -c and the second shall be the expanded arguments to the write command as a single argument. The specified lines shall be written to the standard input of the command. The standard error and standard output of the program, if any, shall be written as described for the print command. If the last character in that output is not a <newline>, a <newline> shall be written at the end of the output. The special meaning of the '!' following the write command can be overridden by escaping it with a <backslash> character. Current line: Unchanged. Current column: Unchanged. Write and Exit Synopsis: [2addr] x[it][!][file] If the edit buffer has not been modified since the last complete write, xit shall be equivalent to the quit command, or if a '!' is appended to the command name, to quit!. 
Otherwise, xit shall be equivalent to the wq command, or if a '!' is appended to the command name, to wq!. Current line: Unchanged. Current column: Unchanged. Yank Synopsis: [2addr] ya[nk][buffer][count] Copy the specified lines to the specified buffer (by default, the unnamed buffer), which shall become a line-mode buffer. Current line: Unchanged. Current column: Unchanged. Adjust Window Synopsis: [1addr] z[!][type ...][count][flags] If no line is specified, the current line shall be the default; if type is omitted as well, the current line value shall first be incremented by 1. If incrementing the current line would cause it to be greater than the last line in the edit buffer, it shall be an error. If there are <blank> characters between the type argument and the preceding z command name or optional '!' character, it shall be an error. If count is specified, the value of the window edit option shall be set to count (as described in window). If count is omitted, it shall default to 2 times the value of the scroll edit option, or if ! was specified, the number of lines in the display minus 1. If type is omitted, then count lines starting with the specified line shall be written. Otherwise, count lines starting with the line specified by the type argument shall be written. The type argument shall change the lines to be written. The possible values of type are as follows: - The specified line shall be decremented by the following value: (((number of '-' characters) x count) -1) If the calculation would result in a number less than 1, it shall be an error. Write lines from the edit buffer, starting at the new value of line, until count lines or the last line in the edit buffer has been written. + The specified line shall be incremented by the following value: (((number of '+' characters) -1) x count) +1 If the calculation would result in a number greater than the last line in the edit buffer, it shall be an error. Write lines from the edit buffer, starting at the new value of line, until count lines or the last line in the edit buffer has been written. =,. If more than a single '.' or '=' is specified, it shall be an error. The following steps shall be taken: 1. If count is zero, nothing shall be written. 2. Write as many of the N lines before the current line in the edit buffer as exist. If count or '!' was specified, N shall be: (count -1) /2 Otherwise, N shall be: (count -3) /2 If N is a number less than 3, no lines shall be written. 3. If '=' was specified as the type character, write a line consisting of the smaller of the number of columns in the display divided by two, or 40 '-' characters. 4. Write the current line. 5. Repeat step 3. 6. Write as many of the N lines after the current line in the edit buffer as exist. N shall be defined as in step 2. If N is a number less than 3, no lines shall be written. If count is less than 3, no lines shall be written. ^ The specified line shall be decremented by the following value: (((number of '^' characters) +1) x count) -1 If the calculation would result in a number less than 1, it shall be an error. Write lines from the edit buffer, starting at the new value of line, until count lines or the last line in the edit buffer has been written. Current line: Set to the last line written, unless the type is =, in which case, set to the specified line. Current column: Set to non-<blank>. Escape Synopsis: ! command [addr]! command The contents of the line after the '!' shall have '%', '#', and '!' 
characters expanded as described in Command Line Parsing in ex. If the expansion causes the text of the line to change, it shall be redisplayed, preceded by a single '!' character. The ex utility shall execute the program named by the shell edit option. It shall pass two arguments to the program; the first shall be -c, and the second shall be the expanded arguments to the ! command as a single argument. If no lines are specified, the standard input, standard output, and standard error of the program shall be set to the standard input, standard output, and standard error of the ex program when it was invoked. In addition, a warning message shall be written if the edit buffer has been modified since the last complete write, and the warn edit option is set. If lines are specified, they shall be passed to the program as standard input, and the standard output and standard error of the program shall replace those lines in the edit buffer. Each line in the program output (as delimited by <newline> characters or the end of the output if it is not immediately preceded by a <newline>), shall be a separate line in the edit buffer. Any occurrences of <carriage-return> and <newline> pairs in the output shall be treated as single <newline> characters. The specified lines shall be copied into the unnamed buffer before they are replaced, and the unnamed buffer shall become a line- mode buffer. If in ex mode, a single '!' character shall be written when the program completes. This command shall be affected by the shell and warn edit options. If no lines are specified, this command shall be affected by the autowrite and writeany edit options. If lines are specified, this command shall be affected by the autoprint edit option. Current line: 1. If no lines are specified, unchanged. 2. Otherwise, set to the last line read in, if any lines are read in. 3. Otherwise, set to the line before the first line of the lines specified, if that line exists. 4. Otherwise, set to the first line of the edit buffer if the edit buffer is not empty. 5. Otherwise, set to zero. Current column: If no lines are specified, unchanged. Otherwise, set to non-<blank>. Shift Left Synopsis: [2addr] <[< ...][count][flags] Shift the specified lines to the start of the line; the number of column positions to be shifted shall be the number of command characters times the value of the shiftwidth edit option. Only leading <blank> characters shall be deleted or changed into other <blank> characters in shifting; other characters shall not be affected. Lines to be shifted shall be copied into the unnamed buffer, which shall become a line-mode buffer. This command shall be affected by the autoprint edit option. Current line: Set to the last line in the lines specified. Current column: Set to non-<blank>. Shift Right Synopsis: [2addr] >[> ...][count][flags] Shift the specified lines away from the start of the line; the number of column positions to be shifted shall be the number of command characters times the value of the shiftwidth edit option. The shift shall be accomplished by adding <blank> characters as a prefix to the line or changing leading <blank> characters into other <blank> characters. Empty lines shall not be changed. Lines to be shifted shall be copied into the unnamed buffer, which shall become a line-mode buffer. This command shall be affected by the autoprint edit option. Current line: Set to the last line in the lines specified. Current column: Set to non-<blank>. 
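The following example is illustrative only and is not part of the normative description; it assumes the default shiftwidth of 8 and lines that contain text. The command

    :set shiftwidth=8
    :1,2>

shifts lines 1 and 2 eight column positions away from the start of the line by prefixing them with <blank> characters, the command

    :1,2>>

shifts the same lines sixteen column positions (two command characters times the value of the shiftwidth edit option), and the command

    :1,2<

shifts lines 1 and 2 eight column positions toward the start of the line by deleting leading <blank> characters; characters other than leading <blank> characters are not affected.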
<control>D Synopsis: <control>-D Write the next n lines, where n is the minimum of the values of the scroll edit option and the number of lines after the current line in the edit buffer. If the current line is the last line of the edit buffer it shall be an error. Current line: Set to the last line written. Current column: Set to non-<blank>. Write Line Number Synopsis: [1addr] = [flags] If line is not specified, it shall default to the last line in the edit buffer. Write the line number of the specified line. Current line: Unchanged. Current column: Unchanged. Execute Synopsis: [2addr] @ buffer [2addr] * buffer If no buffer is specified or is specified as '@' or '*', the last buffer executed shall be used. If no previous buffer has been executed, it shall be an error. For each line specified by the addresses, set the current line ('.') to the specified line, and execute the contents of the named buffer (as they were at the time the @ command was executed) as ex commands. For each line of a line-mode buffer, and all but the last line of a character-mode buffer, the ex command parser shall behave as if the line was terminated by a <newline>. If an error occurs during this process, or a line specified by the addresses does not exist when the current line would be set to it, or more than a single line was specified by the addresses, and the contents of the edit buffer are replaced (for example, by the ex :edit command) an error message shall be written, and no more commands resulting from the execution of this command shall be processed. Current line: As specified for the individual ex commands. Current column: As specified for the individual ex commands. Regular Expressions in ex The ex utility shall support regular expressions that are a superset of the basic regular expressions described in the Base Definitions volume of POSIX.12017, Section 9.3, Basic Regular Expressions. A null regular expression ("//") shall be equivalent to the last regular expression encountered. Regular expressions can be used in addresses to specify lines and, in some commands (for example, the substitute command), to specify portions of a line to be substituted. The following constructs can be used to enhance the basic regular expressions: \< Match the beginning of a word. (See the definition of word at the beginning of Command Descriptions in ex.) \> Match the end of a word. ~ Match the replacement part of the last substitute command. The <tilde> ('~') character can be escaped in a regular expression to become a normal character with no special meaning. The <backslash> shall be discarded. When the editor option magic is not set, the only characters with special meanings shall be '^' at the beginning of a pattern, '$' at the end of a pattern, and <backslash>. The characters '.', '*', '[', and '~' shall be treated as ordinary characters unless preceded by a <backslash>; when preceded by a <backslash> they shall regain their special meaning, or in the case of <backslash>, be handled as a single <backslash>. <backslash> characters used to escape other characters shall be discarded. Replacement Strings in ex The character '&' ('\&' if the editor option magic is not set) in the replacement string shall stand for the text matched by the pattern to be replaced. The character '~' ('\~' if magic is not set) shall be replaced by the replacement part of the previous substitute command. The sequence '\n', where n is an integer, shall be replaced by the text matched by the corresponding back- reference expression. 
If the corresponding back-reference expression does not match, then the characters '\n' shall be replaced by the empty string. The strings '\l', '\u', '\L', and '\U' can be used to modify the case of elements in the replacement string (using the '\&' or "\digit" notation). The string '\l' ('\u') shall cause the character that follows to be converted to lowercase (uppercase). The string '\L' ('\U') shall cause all characters subsequent to it to be converted to lowercase (uppercase) as they are inserted by the substitution until the string '\e' or '\E', or the end of the replacement string, is encountered. Otherwise, any character following a <backslash> shall be treated as that literal character, and the escaping <backslash> shall be discarded.

An example of case conversion with the s command is as follows:

    :p
    The cat sat on the mat.
    :s/\<.at\>/\u&/gp
    The Cat Sat on the Mat.
    :s/S\(.*\)M/S\U\1\eM/p
    The Cat SAT ON THE Mat.

Edit Options in ex

The ex utility has a number of options that modify its behavior. These options have default settings, which can be changed using the set command. Options are Boolean unless otherwise specified.

autoindent, ai [Default unset] If autoindent is set, each line in input mode shall be indented (using first as many <tab> characters as possible, as determined by the editor option tabstop, and then using <space> characters) to align with another line, as follows: 1. If in open or visual mode and the text input is part of a line-oriented command (see the EXTENDED DESCRIPTION in vi(1p)), align to the first column. 2. Otherwise, if in open or visual mode, indentation for each line shall be set as follows: a. If a line was previously inserted as part of this command, it shall be set to the indentation of the last inserted line by default, or as otherwise specified for the <control>D character in Input Mode Commands in vi. b. Otherwise, it shall be set to the indentation of the previous current line, if any; otherwise, to the first column. 3. For the ex a, i, and c commands, indentation for each line shall be set as follows: a. If a line was previously inserted as part of this command, it shall be set to the indentation of the last inserted line by default, or as otherwise specified for the eof character in Scroll. b. Otherwise, if the command is the ex a command, it shall be set to the line appended after, if any; otherwise to the first column. c. Otherwise, if the command is the ex i command, it shall be set to the line inserted before, if any; otherwise to the first column. d. Otherwise, if the command is the ex c command, it shall be set to the indentation of the line replaced.

autoprint, ap [Default set] If autoprint is set, the current line shall be written after each ex command that modifies the contents of the current edit buffer, and after each tag command for which the tag search pattern was found or tag line number was valid, unless: 1. The command was executed while in open or visual mode. 2. The command was executed as part of a global or v command or @ buffer execution. 3. The command was the form of the read command that reads a file into the edit buffer. 4. The command was the append, change, or insert command. 5. The command was not terminated by a <newline>. 6. The current line shall be written by a flag specified to the command; for example, delete # shall write the current line as specified for the flag modifier to the delete command, and not as specified by the autoprint edit option.
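For example (this fragment is illustrative only and is not part of the normative description), with autoprint set, deleting a line in ex command mode causes the new current line to be written:

    :set autoprint
    :3delete

After the delete, the new current line (the line following the one that was deleted) is written; with noautoprint set, no line would be written.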
autowrite, aw [Default unset] If autowrite is set, and the edit buffer has been modified since it was last completely written to any file, the contents of the edit buffer shall be written as if the ex write command had been specified without arguments, before each command affected by the autowrite edit option is executed. Appending the character '!' to the command name of any of the ex commands except '!' shall prevent the write. If the write fails, it shall be an error and the command shall not be executed. beautify, bf [Default unset] If beautify is set, all non-printable characters, other than <tab>, <newline>, and <form-feed> characters, shall be discarded from text read in from files. directory, dir [Default implementation-defined] The value of this option specifies the directory in which the editor buffer is to be placed. If this directory is not writable by the user, the editor shall quit. edcompatible, ed [Default unset] Causes the presence of g and c suffixes on substitute commands to be remembered, and toggled by repeating the suffixes. errorbells, eb [Default unset] If the editor is in ex mode, and the terminal does not support a standout mode (such as inverse video), and errorbells is set, error messages shall be preceded by alerting the terminal. exrc [Default unset] If exrc is set, ex shall access any .exrc file in the current directory, as described in Initialization in ex and vi. If exrc is not set, ex shall ignore any .exrc file in the current directory during initialization, unless the current directory is that named by the HOME environment variable. ignorecase, ic [Default unset] If ignorecase is set, characters that have uppercase and lowercase representations shall have those representations considered as equivalent for purposes of regular expression comparison. The ignorecase edit option shall affect all remembered regular expressions; for example, unsetting the ignorecase edit option shall cause a subsequent vi n command to search for the last basic regular expression in a case-sensitive fashion. list [Default unset] If list is set, edit buffer lines written while in ex command mode shall be written as specified for the print command with the l flag specified. In open or visual mode, each edit buffer line shall be displayed as specified for the ex print command with the l flag specified. In open or visual text input mode, when the cursor does not rest on any character in the line, it shall rest on the '$' marking the end of the line. magic [Default set] If magic is set, modify the interpretation of characters in regular expressions and substitution replacement strings (see Regular Expressions in ex and Replacement Strings in ex). mesg [Default set] If mesg is set, the permission for others to use the write or talk commands to write to the terminal shall be turned on while in open or visual mode. The shell-level command mesg n shall take precedence over any setting of the ex mesg option; that is, if mesg y was issued before the editor started (or in a shell escape), such as: :!mesg y the mesg option in ex shall suppress incoming messages, but the mesg option shall not enable incoming messages if mesg n was issued. number, nu [Default unset] If number is set, edit buffer lines written while in ex command mode shall be written with line numbers, in the format specified by the print command with the # flag specified. In ex text input mode, each line shall be preceded by the line number it will have in the file. 
In open or visual mode, each edit buffer line shall be displayed with a preceding line number, in the format specified by the ex print command with the # flag specified. This line number shall not be considered part of the line for the purposes of evaluating the current column; that is, column position 1 shall be the first column position after the format specified by the print command. paragraphs, para [Default in the POSIX locale IPLPPPQPP LIpplpipbp] The paragraphs edit option shall define additional paragraph boundaries for the open and visual mode commands. The paragraphs edit option can be set to a character string consisting of zero or more character pairs. It shall be an error to set it to an odd number of characters. prompt [Default set] If prompt is set, ex command mode input shall be prompted for with a <colon> (':'); when unset, no prompt shall be written. readonly [Default see text] If the readonly edit option is set, read-only mode shall be enabled (see Write). The readonly edit option shall be initialized to set if either of the following conditions are true: * The command-line option -R was specified. * Performing actions equivalent to the access() function called with the following arguments indicates that the file lacks write permission: 1. The current pathname is used as the path argument. 2. The constant W_OK is used as the amode argument. The readonly edit option may be initialized to set for other, implementation-defined reasons. The readonly edit option shall not be initialized to unset based on any special privileges of the user or process. The readonly edit option shall be reinitialized each time that the contents of the edit buffer are replaced (for example, by an edit or next command) unless the user has explicitly set it, in which case it shall remain set until the user explicitly unsets it. Once unset, it shall again be reinitialized each time that the contents of the edit buffer are replaced. redraw [Default unset] The editor simulates an intelligent terminal on a dumb terminal. (Since this is likely to require a large amount of output to the terminal, it is useful only at high transmission speeds.) remap [Default set] If remap is set, map translation shall allow for maps defined in terms of other maps; translation shall continue until a final product is obtained. If unset, only a one-step translation shall be done. report [Default 5] The value of this report edit option specifies what number of lines being added, copied, deleted, or modified in the edit buffer will cause an informational message to be written to the user. The following conditions shall cause an informational message. The message shall contain the number of lines added, copied, deleted, or modified, but is otherwise unspecified. * An ex or vi editor command, other than open, undo, or visual, that modifies at least the value of the report edit option number of lines, and which is not part of an ex global or v command, or ex or vi buffer execution, shall cause an informational message to be written. * An ex yank or vi y or Y command, that copies at least the value of the report edit option plus 1 number of lines, and which is not part of an ex global or v command, or ex or vi buffer execution, shall cause an informational message to be written. 
* An ex global, v, open, undo, or visual command or ex or vi buffer execution, that adds or deletes a total of at least the value of the report edit option number of lines, and which is not part of an ex global or v command, or ex or vi buffer execution, shall cause an informational message to be written. (For example, if 3 lines were added and 8 lines deleted during an ex visual command, 5 would be the number compared against the report edit option after the command completed.) scroll, scr [Default (number of lines in the display -1)/2] The value of the scroll edit option shall determine the number of lines scrolled by the ex <control>D and z commands. For the vi <control>D and <control>U commands, it shall be the initial number of lines to scroll when no previous <control>D or <control>U command has been executed. sections [Default in the POSIX locale NHSHH HUnhsh] The sections edit option shall define additional section boundaries for the open and visual mode commands. The sections edit option can be set to a character string consisting of zero or more character pairs; it shall be an error to set it to an odd number of characters. shell, sh [Default from the environment variable SHELL] The value of this option shall be a string. The default shall be taken from the SHELL environment variable. If the SHELL environment variable is null or empty, the sh (see sh(1p)) utility shall be the default. shiftwidth, sw [Default 8] The value of this option shall give the width in columns of an indentation level used during autoindentation and by the shift commands (< and >). showmatch, sm [Default unset] The functionality described for the showmatch edit option need not be supported on block-mode terminals or terminals with insufficient capabilities. If showmatch is set, in open or visual mode, when a ')' or '}' is typed, if the matching '(' or '{' is currently visible on the display, the matching '(' or '{' shall be flagged moving the cursor to its location for an unspecified amount of time. showmode [Default unset] If showmode is set, in open or visual mode, the current mode that the editor is in shall be displayed on the last line of the display. Command mode and text input mode shall be differentiated; other unspecified modes and implementation- defined information may be displayed. slowopen [Default unset] If slowopen is set during open and visual text input modes, the editor shall not update portions of the display other than those display line columns that display the characters entered by the user (see Input Mode Commands in vi). tabstop, ts [Default 8] The value of this edit option shall specify the column boundary used by a <tab> in the display (see autoprint, ap and Input Mode Commands in vi). taglength, tl [Default zero] The value of this edit option shall specify the maximum number of characters that are considered significant in the user-specified tag name and in the tag name from the tags file. If the value is zero, all characters in both tag names shall be significant. tags [Default see text] The value of this edit option shall be a string of <blank>-delimited pathnames of files used by the tag command. The default value is unspecified. term [Default from the environment variable TERM] The value of this edit option shall be a string. The default shall be taken from the TERM variable in the environment. If the TERM environment variable is empty or null, the default is unspecified. The editor shall use the value of this edit option to determine the type of the display device. 
The results are unspecified if the user changes the value of the term edit option after editor initialization.

terse [Default unset] If terse is set, error messages may be less verbose. However, except for this caveat, error messages are unspecified. Furthermore, not all error messages need change for different settings of this option.

warn [Default set] If warn is set, and the contents of the edit buffer have been modified since they were last completely written, the editor shall write a warning message before certain ! commands (see Escape).

window [Default see text] A value used in open and visual mode, by the <control>B and <control>F commands, and, in visual mode, to specify the number of lines displayed when the screen is repainted. If the -w command-line option is not specified, the default value shall be set to the value of the LINES environment variable. If the LINES environment variable is empty or null, the default shall be the number of lines in the display minus 1. Setting the window edit option to zero or to a value greater than the number of lines in the display minus 1 (either explicitly or based on the -w option or the LINES environment variable) shall cause the window edit option to be set to the number of lines in the display minus 1. The baud rate of the terminal line may change the default in an implementation-defined manner.

wrapmargin, wm [Default 0] If the value of this edit option is zero, it shall have no effect. If not in the POSIX locale, the effect of this edit option is implementation-defined. Otherwise, it shall specify a number of columns from the ending margin of the terminal. During open and visual text input modes, for each character for which any part of the character is displayed in a column that is less than wrapmargin columns from the ending margin of the display line, the editor shall behave as follows: 1. If the character triggering this event is a <blank>, it, and all immediately preceding <blank> characters on the current line entered during the execution of the current text input command, shall be discarded, and the editor shall behave as if the user had entered a single <newline> instead. In addition, if the next user-entered character is a <space>, it shall be discarded as well. 2. Otherwise, if there are one or more <blank> characters on the current line immediately preceding the last group of inserted non-<blank> characters which was entered during the execution of the current text input command, the <blank> characters shall be replaced as if the user had entered a single <newline> instead. If the autoindent edit option is set, and the events described in 1. or 2. are performed, any <blank> characters at or after the cursor in the current line shall be discarded. The ending margin shall be determined by the system or overridden by the user, as described for COLUMNS in the ENVIRONMENT VARIABLES section and the Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables.

wrapscan, ws [Default set] If wrapscan is set, searches (the ex / or ? addresses, or open and visual mode /, ?, N, and n commands) shall wrap around the beginning or end of the edit buffer; when unset, searches shall stop at the beginning or end of the edit buffer.

writeany, wa [Default unset] If writeany is set, some of the checks performed when executing the ex write commands shall be inhibited, as described in editor option autowrite.

EXIT STATUS
The following exit values shall be returned:

     0    Successful completion.
    >0    An error occurred.
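For example (this fragment is illustrative only; the filename and the substitution are arbitrary), a shell script might run ex non-interactively with the -s option and test the exit status to detect a failed edit:

    ex -s file.txt <<'EOF'
    %s/old_name/new_name/g
    wq
    EOF
    if [ $? -ne 0 ]; then
        printf 'edit of file.txt failed\n' >&2
    fi

If the substitution finds no match, or any other error occurs, ex exits with a non-zero status and, because the standard input is not a terminal device file, does not write the file (see CONSEQUENCES OF ERRORS below).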
CONSEQUENCES OF ERRORS
When any error is encountered and the standard input is not a terminal device file, ex shall not write the file or return to command or text input mode, and shall terminate with a non-zero exit status.

Otherwise, when an unrecoverable error is encountered, it shall be equivalent to a SIGHUP asynchronous event.

Otherwise, when an error is encountered, the editor shall behave as specified in Command Line Parsing in ex.

The following sections are informative.

APPLICATION USAGE
If a SIGSEGV signal is received while ex is saving a file, the file might not be successfully saved.

The next command can accept more than one file, so usage such as:

    next `ls [abc]*`

is valid; it would not be valid for the edit or read commands, for example, because they expect only one file and unspecified results occur.

EXAMPLES
None.

RATIONALE
The ex/vi specification is based on the historical practice found in the 4 BSD and System V implementations of ex and vi.

Restricted editors (both the historical red utility and modifications to ex) were considered and rejected for inclusion. Neither option provided the level of security that users might expect.

It is recognized that ex visual mode and related features would be difficult, if not impossible, to implement satisfactorily on a block-mode terminal, or a terminal without any form of cursor addressing; thus, it is not a mandatory requirement that such features should work on all terminals. It is the intention, however, that an ex implementation should provide the full set of capabilities on all terminals capable of supporting them.

Options
The -c replacement for +command was inspired by the -e option of sed. Historically, all such commands (see edit and next as well) were executed from the last line of the edit buffer. This meant, for example, that "+/pattern" would fail unless the wrapscan option was set. POSIX.1-2008 requires conformance to historical practice. The +command option is no longer specified by POSIX.1-2008 but may be present in some implementations.

Historically, some implementations restricted the ex commands that could be listed as part of the command line arguments. For consistency, POSIX.1-2008 does not permit these restrictions.

In historical implementations of the editor, the -R option (and the readonly edit option) only prevented overwriting of files; appending to files was still permitted, mapping loosely into the csh noclobber variable. Some implementations, however, have not followed this semantic, and readonly does not permit appending either. POSIX.1-2008 follows the latter practice, believing that it is a more obvious and intuitive meaning of readonly.

The -s option suppresses all interactive user feedback and is useful for editing scripts in batch jobs. The list of specific effects is historical practice.

The terminal type ``incapable of supporting open and visual modes'' has historically been named ``dumb''.

The -t option was required because the ctags utility appears in POSIX.1-2008 and the option is available in all historical implementations of ex.

Historically, the ex and vi utilities accepted a -x option, which did encryption based on the algorithm found in the historical crypt utility. The -x option for encryption, and the associated crypt utility, were omitted because the algorithm used was not specifiable and the export control laws of some nations make it difficult to export cryptographic technology. In addition, it did not historically provide the level of security that users might expect.
Standard Input An end-of-file condition is not equivalent to an end-of-file character. A common end-of-file character, <control>D, is historically an ex command. There was no maximum line length in historical implementations of ex. Specifically, as it was parsed in chunks, the addresses had a different maximum length than the filenames. Further, the maximum line buffer size was declared as BUFSIZ, which was different lengths on different systems. This version selected the value of {LINE_MAX} to impose a reasonable restriction on portable usage of ex and to aid test suite writers in their development of realistic tests that exercise this limit. Input Files It was an explicit decision by the standard developers that a <newline> be added to any file lacking one. It was believed that this feature of ex and vi was relied on by users in order to make text files lacking a trailing <newline> more portable. It is recognized that this will require a user-specified option or extension for implementations that permit ex and vi to edit files of type other than text if such files are not otherwise identified by the system. It was agreed that the ability to edit files of arbitrary type can be useful, but it was not considered necessary to mandate that an ex or vi implementation be required to handle files other than text files. The paragraph in the INPUT FILES section, ``By default, ...'', is intended to close a long-standing security problem in ex and vi; that of the ``modeline'' or ``modelines'' edit option. This feature allows any line in the first or last five lines of the file containing the strings "ex:" or "vi:" (and, apparently, "ei:" or "vx:") to be a line containing editor commands, and ex interprets all the text up to the next ':' or <newline> as a command. Consider the consequences, for example, of an unsuspecting user using ex or vi as the editor when replying to a mail message in which a line such as: ex:! rm -rf : appeared in the signature lines. The standard developers believed strongly that an editor should not by default interpret any lines of a file. Vendors are strongly urged to delete this feature from their implementations of ex and vi. Asynchronous Events The intention of the phrase ``complete write'' is that the entire edit buffer be written to stable storage. The note regarding temporary files is intended for implementations that use temporary files to back edit buffers unnamed by the user. Historically, SIGQUIT was ignored by ex, but was the equivalent of the Q command in visual mode; that is, it exited visual mode and entered ex mode. POSIX.12008 permits, but does not require, this behavior. Historically, SIGINT was often used by vi users to terminate text input mode (<control>C is often easier to enter than <ESC>). Some implementations of vi alerted the terminal on this event, and some did not. POSIX.12008 requires that SIGINT behave identically to <ESC>, and that the terminal not be alerted. Historically, suspending the ex editor during text input mode was similar to SIGINT, as completed lines were retained, but any partial line discarded, and the editor returned to command mode. POSIX.12008 is silent on this issue; implementations are encouraged to follow historical practice, where possible. Historically, the vi editor did not treat SIGTSTP as an asynchronous event, and it was therefore impossible to suspend the editor in visual text input mode. There are two major reasons for this. 
The first is that SIGTSTP is a broadcast signal on UNIX systems, and the chain of events where the shell execs an application that then execs vi usually caused confusion for the terminal state if SIGTSTP was delivered to the process group in the default manner. The second was that most implementations of the UNIX curses package did not handle SIGTSTP safely, and the receipt of SIGTSTP at the wrong time would cause them to crash. POSIX.12008 is silent on this issue; implementations are encouraged to treat suspension as an asynchronous event if possible. Historically, modifications to the edit buffer made before SIGINT interrupted an operation were retained; that is, anywhere from zero to all of the lines to be modified might have been modified by the time the SIGINT arrived. These changes were not discarded by the arrival of SIGINT. POSIX.12008 permits this behavior, noting that the undo command is required to be able to undo these partially completed commands. The action taken for signals other than SIGINT, SIGCONT, SIGHUP, and SIGTERM is unspecified because some implementations attempt to save the edit buffer in a useful state when other signals are received. Standard Error For ex/vi, diagnostic messages are those messages reported as a result of a failed attempt to invoke ex or vi, such as invalid options or insufficient resources, or an abnormal termination condition. Diagnostic messages should not be confused with the error messages generated by inappropriate or illegal user commands. Initialization in ex and vi If an ex command (other than cd, chdir, or source) has a filename argument, one or both of the alternate and current pathnames will be set. Informally, they are set as follows: 1. If the ex command is one that replaces the contents of the edit buffer, and it succeeds, the current pathname will be set to the filename argument (the first filename argument in the case of the next command) and the alternate pathname will be set to the previous current pathname, if there was one. 2. In the case of the file read/write forms of the read and write commands, if there is no current pathname, the current pathname will be set to the filename argument. 3. Otherwise, the alternate pathname will be set to the filename argument. For example, :edit foo and :recover foo, when successful, set the current pathname, and, if there was a previous current pathname, the alternate pathname. The commands :write, !command, and :edit set neither the current or alternate pathnames. If the :edit foo command were to fail for some reason, the alternate pathname would be set. The read and write commands set the alternate pathname to their file argument, unless the current pathname is not set, in which case they set the current pathname to their file arguments. The alternate pathname was not historically set by the :source command. POSIX.12008 requires conformance to historical practice. Implementations adding commands that take filenames as arguments are encouraged to set the alternate pathname as described here. Historically, ex and vi read the .exrc file in the $HOME directory twice, if the editor was executed in the $HOME directory. POSIX.12008 prohibits this behavior. Historically, the 4 BSD ex and vi read the $HOME and local .exrc files if they were owned by the real ID of the user, or the sourceany option was set, regardless of other considerations. This was a security problem because it is possible to put normal UNIX system commands inside a .exrc file. 
POSIX.12008 does not specify the sourceany option, and historical implementations are encouraged to delete it. The .exrc files must be owned by the real ID of the user, and not writable by anyone other than the owner. The appropriate privileges exception is intended to permit users to acquire special privileges, but continue to use the .exrc files in their home directories. System V Release 3.2 and later vi implementations added the option [no]exrc. The behavior is that local .exrc files are read-only if the exrc option is set. The default for the exrc option was off, so by default, local .exrc files were not read. The problem this was intended to solve was that System V permitted users to give away files, so there is no possible ownership or writeability test to ensure that the file is safe. This is still a security problem on systems where users can give away files, but there is nothing additional that POSIX.12008 can do. The implementation-defined exception is intended to permit groups to have local .exrc files that are shared by users, by creating pseudo-users to own the shared files. POSIX.12008 does not mention system-wide ex and vi start-up files. While they exist in several implementations of ex and vi, they are not present in any implementations considered historical practice by POSIX.12008. Implementations that have such files should use them only if they are owned by the real user ID or an appropriate user (for example, root on UNIX systems) and if they are not writable by any user other than their owner. System-wide start-up files should be read before the EXINIT variable, $HOME/.exrc, or local .exrc files are evaluated. Historically, any ex command could be entered in the EXINIT variable or the .exrc file, although ones requiring that the edit buffer already contain lines of text generally caused historical implementations of the editor to drop core. POSIX.12008 requires that any ex command be permitted in the EXINIT variable and .exrc files, for simplicity of specification and consistency, although many of them will obviously fail under many circumstances. The initialization of the contents of the edit buffer uses the phrase ``the effect shall be'' with regard to various ex commands. The intent of this phrase is that edit buffer contents loaded during the initialization phase not be lost; that is, loading the edit buffer should fail if the .exrc file read in the contents of a file and did not subsequently write the edit buffer. An additional intent of this phrase is to specify that the initial current line and column is set as specified for the individual ex commands. Historically, the -t option behaved as if the tag search were a +command; that is, it was executed from the last line of the file specified by the tag. This resulted in the search failing if the pattern was a forward search pattern and the wrapscan edit option was not set. POSIX.12008 does not permit this behavior, requiring that the search for the tag pattern be performed on the entire file, and, if not found, that the current line be set to a more reasonable location in the file. Historically, the empty edit buffer presented for editing when a file was not specified by the user was unnamed. This is permitted by POSIX.12008; however, implementations are encouraged to provide users a temporary filename for this buffer because it permits them the use of ex commands that use the current pathname during temporary edit sessions. 
Historically, the file specified using the -t option was not part of the current argument list. This practice is permitted by POSIX.12008; however, implementations are encouraged to include its name in the current argument list for consistency. Historically, the -c command was generally not executed until a file that already exists was edited. POSIX.12008 requires conformance to this historical practice. Commands that could cause the -c command to be executed include the ex commands edit, next, recover, rewind, and tag, and the vi commands <control>^ and <control>]. Historically, reading a file into an edit buffer did not cause the -c command to be executed (even though it might set the current pathname) with the exception that it did cause the -c command to be executed if: the editor was in ex mode, the edit buffer had no current pathname, the edit buffer was empty, and no read commands had yet been attempted. For consistency and simplicity of specification, POSIX.12008 does not permit this behavior. Historically, the -r option was the same as a normal edit session if there was no recovery information available for the file. This allowed users to enter: vi -r *.c and recover whatever files were recoverable. In some implementations, recovery was attempted only on the first file named, and the file was not entered into the argument list; in others, recovery was attempted for each file named. In addition, some historical implementations ignored -r if -t was specified or did not support command line file arguments with the -t option. For consistency and simplicity of specification, POSIX.12008 disallows these special cases, and requires that recovery be attempted the first time each file is edited. Historically, vi initialized the ` and ' marks, but ex did not. This meant that if the first command in ex mode was visual or if an ex command was executed first (for example, vi +10 file), vi was entered without the marks being initialized. Because the standard developers believed the marks to be generally useful, and for consistency and simplicity of specification, POSIX.12008 requires that they always be initialized if in open or visual mode, or if in ex mode and the edit buffer is not empty. Not initializing it in ex mode if the edit buffer is empty is historical practice; however, it has always been possible to set (and use) marks in empty edit buffers in open and visual mode edit sessions. Addressing Historically, ex and vi accepted the additional addressing forms '\/' and '\?'. They were equivalent to "//" and "??", respectively. They are not required by POSIX.12008, mostly because nobody can remember whether they ever did anything different historically. Historically, ex and vi permitted an address of zero for several commands, and permitted the % address in empty files for others. For consistency, POSIX.12008 requires support for the former in the few commands where it makes sense, and disallows it otherwise. In addition, because POSIX.12008 requires that % be logically equivalent to "1,$", it is also supported where it makes sense and disallowed otherwise. Historically, the % address could not be followed by further addresses. For consistency and simplicity of specification, POSIX.12008 requires that additional addresses be supported. All of the following are valid addresses: +++ Three lines after the current line. /re/- One line before the next occurrence of re. -2 Two lines before the current line. 3 ---- 2 Line one (note intermediate negative address). 1 2 3 Line six. 
Any number of addresses can be provided to commands taking addresses; for example, "1,2,3,4,5p" prints lines 4 and 5, because two is the greatest valid number of addresses accepted by the print command. This, in combination with the <semicolon> delimiter, permits users to create commands based on ordered patterns in the file. For example, the command 3;/foo/;+2print will display the first line after line 3 that contains the pattern foo, plus the next two lines. Note that the address 3; must be evaluated before being discarded because the search origin for the /foo/ command depends on this. Historically, values could be added to addresses by including them after one or more <blank> characters; for example, 3 - 5p wrote the seventh line of the file, and /foo/ 5 was the same as /foo/+5. However, only absolute values could be added; for example, 5 /foo/ was an error. POSIX.12008 requires conformance to historical practice. Address offsets are separately specified from addresses because they could historically be provided to visual mode search commands. Historically, any missing addresses defaulted to the current line. This was true for leading and trailing <comma>-delimited addresses, and for trailing <semicolon>-delimited addresses. For consistency, POSIX.12008 requires it for leading <semicolon> addresses as well. Historically, ex and vi accepted the '^' character as both an address and as a flag offset for commands. In both cases it was identical to the '-' character. POSIX.12008 does not require or prohibit this behavior. Historically, the enhancements to basic regular expressions could be used in addressing; for example, '~', '\<', and '\>'. POSIX.12008 requires conformance to historical practice; that is, that regular expression usage be consistent, and that regular expression enhancements be supported wherever regular expressions are used. Command Line Parsing in ex Historical ex command parsing was even more complex than that described here. POSIX.12008 requires the subset of the command parsing that the standard developers believed was documented and that users could reasonably be expected to use in a portable fashion, and that was historically consistent between implementations. (The discarded functionality is obscure, at best.) Historical implementations will require changes in order to comply with POSIX.12008; however, users are not expected to notice any of these changes. Most of the complexity in ex parsing is to handle three special termination cases: 1. The !, global, v, and the filter versions of the read and write commands are delimited by <newline> characters (they can contain <vertical-line> characters that are usually shell pipes). 2. The ex, edit, next, and visual in open and visual mode commands all take ex commands, optionally containing <vertical-line> characters, as their first arguments. 3. The s command takes a regular expression as its first argument, and uses the delimiting characters to delimit the command. Historically, <vertical-line> characters in the +command argument of the ex, edit, next, vi, and visual commands, and in the pattern and replacement parts of the s command, did not delimit the command, and in the filter cases for read and write, and the !, global, and v commands, they did not delimit the command at all. 
For example, the following commands are all valid:

    :edit +25 | s/abc/ABC/ file.c
    :s/ | /PIPE/
    :read !spell % | columnate
    :global/pattern/p | l
    :s/a/b/ | s/c/d | set

Historically, empty or <blank>-filled lines in .exrc files and sourced files (as well as EXINIT variables and ex command scripts) were treated as default commands; that is, print commands. POSIX.1-2008 specifically requires that they be ignored when encountered in .exrc and sourced files to eliminate a common source of new user error.

Historically, ex commands with multiple adjacent (or <blank>-separated) vertical lines were handled oddly when executed from ex mode. For example, the command ||| <carriage-return>, when the cursor was on line 1, displayed lines 2, 3, and 5 of the file. In addition, the command | would only display the line after the next line, instead of the next two lines. The former worked more logically when executed from vi mode, and displayed lines 2, 3, and 4. POSIX.1-2008 requires the vi behavior; that is, a single default command and line number increment for each command separator, and trailing <newline> characters after <vertical-line> separators are discarded.

Historically, ex permitted a single extra <colon> as a leading command character; for example, :g/pattern/:p was a valid command. POSIX.1-2008 generalizes this to require that any number of leading <colon> characters be stripped.

Historically, any prefix of the delete command could be followed, without intervening <blank> characters, by a flag character, because in the command d p, p is interpreted as the buffer p. POSIX.1-2008 requires conformance to historical practice.

Historically, the k command could be followed by the mark name without intervening <blank> characters. POSIX.1-2008 requires conformance to historical practice.

Historically, the s command could be immediately followed by flag and option characters; for example, s/e/E/|s|sgc3p was a valid command. However, flag characters could not stand alone; for example, the commands sp and s l would fail, while the commands sgp and s gl would succeed. (Obviously, the '#' flag character was used as a delimiter character if it followed the command.) Another issue was that option characters had to precede flag characters even when the command was fully specified; for example, the command s/e/E/pg would fail, while the command s/e/E/gp would succeed. POSIX.1-2008 requires conformance to historical practice.

Historically, the first command name that had a prefix matching the input from the user was the executed command; for example, ve, ver, and vers all executed the version command. Commands were in a specific order, however, so that 'a' matched append, not abbreviate. POSIX.1-2008 requires conformance to historical practice. The restriction on command search order for implementations with extensions is to avoid the addition of commands such that the historical prefixes would fail to work portably.

Historical implementations of ex and vi did not correctly handle multiple ex commands, separated by <vertical-line> characters, that entered or exited visual mode or the editor. Because implementations of vi exist that do not exhibit this failure mode, POSIX.1-2008 does not permit it.

The requirement that alphabetic command names consist of all following alphabetic characters up to the next non-alphabetic character means that alphabetic command names must be separated from their arguments by one or more non-alphabetic characters, normally a <blank> or '!'
character, except as specified for the exceptions, the delete, k, and s commands. Historically, the repeated execution of the ex default print commands (<control>D, eof, <newline>, <carriage-return>) erased any prompting character and displayed the next lines without scrolling the terminal; that is, immediately below any previously displayed lines. This provided a cleaner presentation of the lines in the file for the user. POSIX.12008 does not require this behavior because it may be impossible in some situations; however, implementations are strongly encouraged to provide this semantic if possible. Historically, it was possible to change files in the middle of a command, and have the rest of the command executed in the new file; for example: :edit +25 file.c | s/abc/ABC/ | 1 was a valid command, and the substitution was attempted in the newly edited file. POSIX.12008 requires conformance to historical practice. The following commands are examples that exercise the ex parser: echo 'foo | bar' > file1; echo 'foo/bar' > file2; vi :edit +1 | s/|/PIPE/ | w file1 | e file2 | 1 | s/\//SLASH/ | wq Historically, there was no protection in editor implementations to avoid ex global, v, @, or * commands changing edit buffers during execution of their associated commands. Because this would almost invariably result in catastrophic failure of the editor, and implementations exist that do exhibit these problems, POSIX.12008 requires that changing the edit buffer during a global or v command, or during a @ or * command for which there will be more than a single execution, be an error. Implementations supporting multiple edit buffers simultaneously are strongly encouraged to apply the same semantics to switching between buffers as well. The ex command quoting required by POSIX.12008 is a superset of the quoting in historical implementations of the editor. For example, it was not historically possible to escape a <blank> in a filename; for example, :edit foo\\\ bar would report that too many filenames had been entered for the edit command, and there was no method of escaping a <blank> in the first argument of an edit, ex, next, or visual command at all. POSIX.12008 extends historical practice, requiring that quoting behavior be made consistent across all ex commands, except for the map, unmap, abbreviate, and unabbreviate commands, which historically used <control>V instead of <backslash> characters for quoting. For those four commands, POSIX.12008 requires conformance to historical practice. Backslash quoting in ex is non-intuitive. <backslash>-escapes are ignored unless they escape a special character; for example, when performing file argument expansion, the string "\\%" is equivalent to '\%', not "\<current pathname>". This can be confusing for users because <backslash> is usually one of the characters that causes shell expansion to be performed, and therefore shell quoting rules must be taken into consideration. Generally, quoting characters are only considered if they escape a special character, and a quoting character must be provided for each layer of parsing for which the character is special. As another example, only a single <backslash> is necessary for the '\l' sequence in substitute replacement patterns, because the character 'l' is not special to any parsing layer above it. <control>V quoting in ex is slightly different from backslash quoting. 
In the four commands where <control>V quoting applies (abbreviate, unabbreviate, map, and unmap), any character may be escaped by a <control>V whether it would have a special meaning or not. POSIX.12008 requires conformance to historical practice. Historical implementations of the editor did not require delimiters within character classes to be escaped; for example, the command :s/[/]// on the string "xxx/yyy" would delete the '/' from the string. POSIX.12008 disallows this historical practice for consistency and because it places a large burden on implementations by requiring that knowledge of regular expressions be built into the editor parser. Historically, quoting <newline> characters in ex commands was handled inconsistently. In most cases, the <newline> character always terminated the command, regardless of any preceding escape character, because <backslash> characters did not escape <newline> characters for most ex commands. However, some ex commands (for example, s, map, and abbreviation) permitted <newline> characters to be escaped (although in the case of map and abbreviation, <control>V characters escaped them instead of <backslash> characters). This was true in not only the command line, but also .exrc and sourced files. For example, the command: map = foo<control-V><newline>bar would succeed, although it was sometimes difficult to get the <control>V and the inserted <newline> passed to the ex parser. For consistency and simplicity of specification, POSIX.12008 requires that it be possible to escape <newline> characters in ex commands at all times, using <backslash> characters for most ex commands, and using <control>V characters for the map and abbreviation commands. For example, the command print<newline>list is required to be parsed as the single command print<newline>list. While this differs from historical practice, POSIX.12008 developers believed it unlikely that any script or user depended on the historical behavior. Historically, an error in a command specified using the -c option did not cause the rest of the -c commands to be discarded. POSIX.12008 disallows this for consistency with mapped keys, the @, global, source, and v commands, the EXINIT environment variable, and the .exrc files. Input Editing in ex One of the common uses of the historical ex editor is over slow network connections. Editors that run in canonical mode can require far less traffic to and from, and far less processing on, the host machine, as well as more easily supporting block-mode terminals. For these reasons, POSIX.12008 requires that ex be implemented using canonical mode input processing, as was done historically. POSIX.12008 does not require the historical 4 BSD input editing characters ``word erase'' or ``literal next''. For this reason, it is unspecified how they are handled by ex, although they must have the required effect. Implementations that resolve them after the line has been ended using a <newline> or <control>M character, and implementations that rely on the underlying system terminal support for this processing, are both conforming. Implementations are strongly urged to use the underlying system functionality, if at all possible, for compatibility with other system text input interfaces. Historically, when the eof character was used to decrement the autoindent level, the cursor moved to display the new end of the autoindent characters, but did not move the cursor to a new line, nor did it erase the <control>D character from the line. 
POSIX.12008 does not specify that the cursor remain on the same line or that the rest of the line is erased; however, implementations are strongly encouraged to provide the best possible user interface; that is, the cursor should remain on the same line, and any <control>D character on the line should be erased. POSIX.12008 does not require the historical 4 BSD input editing character ``reprint'', traditionally <control>R, which redisplayed the current input from the user. For this reason, and because the functionality cannot be implemented after the line has been terminated by the user, POSIX.12008 makes no requirements about this functionality. Implementations are strongly urged to make this historical functionality available, if possible. Historically, <control>Q did not perform a literal next function in ex, as it did in vi. POSIX.12008 requires conformance to historical practice to avoid breaking historical ex scripts and .exrc files. eof Whether the eof character immediately modifies the autoindent characters in the prompt is left unspecified so that implementations can conform in the presence of systems that do not support this functionality. Implementations are encouraged to modify the line and redisplay it immediately, if possible. The specification of the handling of the eof character differs from historical practice only in that eof characters are not discarded if they follow normal characters in the text input. Historically, they were always discarded. Command Descriptions in ex Historically, several commands (for example, global, v, visual, s, write, wq, yank, !, <, >, &, and ~) were executable in empty files (that is, the default address(es) were 0), or permitted explicit addresses of 0 (for example, 0 was a valid address, or 0,0 was a valid range). Addresses of 0, or command execution in an empty file, make sense only for commands that add new text to the edit buffer or write commands (because users may wish to write empty files). POSIX.12008 requires this behavior for such commands and disallows it otherwise, for consistency and simplicity of specification. A count to an ex command has been historically corrected to be no greater than the last line in a file; for example, in a five-line file, the command 1,6print would fail, but the command 1print300 would succeed. POSIX.12008 requires conformance to historical practice. Historically, the use of flags in ex commands could be obscure. General historical practice was as described by POSIX.12008, but there were some special cases. For instance, the list, number, and print commands ignored trailing address offsets; for example, 3p +++# would display line 3, and 3 would be the current line after the execution of the command. The open and visual commands ignored both the trailing offsets and the trailing flags. Also, flags specified to the open and visual commands interacted badly with the list edit option, and setting and then unsetting it during the open/visual session would cause vi to stop displaying lines in the specified format. For consistency and simplicity of specification, POSIX.12008 does not permit any of these exceptions to the general rule. POSIX.12008 uses the word copy in several places when discussing buffers. This is not intended to imply implementation. Historically, ex users could not specify numeric buffers because of the ambiguity this would cause; for example, in the command 3 delete 2, it is unclear whether 2 is a buffer name or a count. 
POSIX.12008 requires conformance to historical practice by default, but does not preclude extensions. Historically, the contents of the unnamed buffer were frequently discarded after commands that did not explicitly affect it; for example, when using the edit command to switch files. For consistency and simplicity of specification, POSIX.12008 does not permit this behavior. The ex utility did not historically have access to the numeric buffers, and, furthermore, deleting lines in ex did not modify their contents. For example, if, after doing a delete in vi, the user switched to ex, did another delete, and then switched back to vi, the contents of the numeric buffers would not have changed. POSIX.12008 requires conformance to historical practice. Numeric buffers are described in the ex utility in order to confine the description of buffers to a single location in POSIX.12008. The metacharacters that trigger shell expansion in file arguments match historical practice, as does the method for doing shell expansion. Implementations wishing to provide users with the flexibility to alter the set of metacharacters are encouraged to provide a shellmeta string edit option. Historically, ex commands executed from vi refreshed the screen when it did not strictly need to do so; for example, :!date > /dev/null does not require a screen refresh because the output of the UNIX date command requires only a single line of the screen. POSIX.12008 requires that the screen be refreshed if it has been overwritten, but makes no requirements as to how an implementation should make that determination. Implementations may prompt and refresh the screen regardless. Abbreviate Historical practice was that characters that were entered as part of an abbreviation replacement were subject to map expansions, the showmatch edit option, further abbreviation expansions, and so on; that is, they were logically pushed onto the terminal input queue, and were not a simple replacement. POSIX.12008 requires conformance to historical practice. Historical practice was that whenever a non-word character (that had not been escaped by a <control>V) was entered after a word character, vi would check for abbreviations. The check was based on the type of the character entered before the word character of the word/non-word pair that triggered the check. The word character of the word/non-word pair that triggered the check and all characters entered before the trigger pair that were of that type were included in the check, with the exception of <blank> characters, which always delimited the abbreviation. This means that, for the abbreviation to work, the lhs must end with a word character, there can be no transitions from word to non-word characters (or vice versa) other than between the last and next-to-last characters in the lhs, and there can be no <blank> characters in the lhs. In addition, because of the historical quoting rules, it was impossible to enter a literal <control>V in the lhs. POSIX.12008 requires conformance to historical practice. Historical implementations did not inform users when abbreviations that could never be used were entered; implementations are strongly encouraged to do so. 
For example, the following abbreviations will work: :ab (p REPLACE :ab p REPLACE :ab ((p REPLACE The following abbreviations will not work: :ab ( REPLACE :ab (pp REPLACE Historical practice is that words on the vi colon command line were subject to abbreviation expansion, including the arguments to the abbrev (and more interestingly) the unabbrev command. Because there are implementations that do not do abbreviation expansion for the first argument to those commands, this is permitted, but not required, by POSIX.12008. However, the following sequence: :ab foo bar :ab foo baz resulted in the addition of an abbreviation of "baz" for the string "bar" in historical ex/vi, and the sequence: :ab foo1 bar :ab foo2 bar :unabbreviate foo2 deleted the abbreviation "foo1", not "foo2". These behaviors are not permitted by POSIX.12008 because they clearly violate the expectations of the user. It was historical practice that <control>V, not <backslash>, characters be interpreted as escaping subsequent characters in the abbreviate command. POSIX.12008 requires conformance to historical practice; however, it should be noted that an abbreviation containing a <blank> will never work. Append Historically, any text following a <vertical-line> command separator after an append, change, or insert command became part of the insert text. For example, in the command: :g/pattern/append|stuff1 a line containing the text "stuff1" would be appended to each line matching pattern. It was also historically valid to enter: :append|stuff1 stuff2 . and the text on the ex command line would be appended along with the text inserted after it. There was an historical bug, however, that the user had to enter two terminating lines (the '.' lines) to terminate text input mode in this case. POSIX.12008 requires conformance to historical practice, but disallows the historical need for multiple terminating lines. Change See the RATIONALE for the append command. Historical practice for cursor positioning after the change command when no text is input, is as described in POSIX.12008. However, one System V implementation is known to have been modified such that the cursor is positioned on the first address specified, and not on the line before the first address. POSIX.12008 disallows this modification for consistency. Historically, the change command did not support buffer arguments, although some implementations allow the specification of an optional buffer. This behavior is neither required nor disallowed by POSIX.12008. Change Directory A common extension in ex implementations is to use the elements of a cdpath edit option as prefix directories for path arguments to chdir that are relative pathnames and that do not have '.' or ".." as their first component. Elements in the cdpath edit option are <colon>-separated. The initial value of the cdpath edit option is the value of the shell CDPATH environment variable. This feature was not included in POSIX.12008 because it does not exist in any of the implementations considered historical practice. Copy Historical implementations of ex permitted copies to lines inside of the specified range; for example, :2,5copy3 was a valid command. POSIX.12008 requires conformance to historical practice. Delete POSIX.12008 requires support for the historical parsing of a delete command followed by flags, without any intervening <blank> characters. For example: 1dp Deletes the first line and prints the line that was second. 1delep As for 1dp. 1d Deletes the first line, saving it in buffer p. 
1d p1l (Pee-one-ell.) Deletes the first line, saving it in buffer p, and listing the line that was second. Edit Historically, any ex command could be entered as a +command argument to the edit command, although some (for example, insert and append) were known to confuse historical implementations. For consistency and simplicity of specification, POSIX.12008 requires that any command be supported as an argument to the edit command. Historically, the command argument was executed with the current line set to the last line of the file, regardless of whether the edit command was executed from visual mode or not. POSIX.12008 requires conformance to historical practice. Historically, the +command specified to the edit and next commands was delimited by the first <blank>, and there was no way to quote them. For consistency, POSIX.12008 requires that the usual ex backslash quoting be provided. Historically, specifying the +command argument to the edit command required a filename to be specified as well; for example, :edit +100 would always fail. For consistency and simplicity of specification, POSIX.12008 does not permit this usage to fail for that reason. Historically, only the cursor position of the last file edited was remembered by the editor. POSIX.12008 requires that this be supported; however, implementations are permitted to remember and restore the cursor position for any file previously edited. File Historical versions of the ex editor file command displayed a current line and number of lines in the edit buffer of 0 when the file was empty, while the vi <control>G command displayed a current line and number of lines in the edit buffer of 1 in the same situation. POSIX.12008 does not permit this discrepancy, instead requiring that a message be displayed indicating that the file is empty. Global The two-pass operation of the global and v commands is not intended to imply implementation, only the required result of the operation. The current line and column are set as specified for the individual ex commands. This requirement is cumulative; that is, the current line and column must track across all the commands executed by the global or v commands. Insert See the RATIONALE for the append command. Historically, insert could not be used with an address of zero; that is, not when the edit buffer was empty. POSIX.12008 requires that this command behave consistently with the append command. Join The action of the join command in relation to the special characters is only defined for the POSIX locale because the correct amount of white space after a period varies; in Japanese none is required, in French only a single space, and so on. List The historical output of the list command was potentially ambiguous. The standard developers believed correcting this to be more important than adhering to historical practice, and POSIX.12008 requires unambiguous output. Map Historically, command mode maps only applied to command names; for example, if the character 'x' was mapped to 'y', the command fx searched for the 'x' character, not the 'y' character. POSIX.12008 requires this behavior. Historically, entering <control>V as the first character of a vi command was an error. Several implementations have extended the semantics of vi such that <control>V means that the subsequent command character is not mapped. This is permitted, but not required, by POSIX.12008. 
Regardless, using <control>V to escape the second or later character in a sequence of characters that might match a map command, or any character in text input mode, is historical practice, and stops the entered keys from matching a map. POSIX.1-2008 requires conformance to historical practice. Historical implementations permitted digits to be used as a map command lhs, but then ignored the map. POSIX.1-2008 requires that the mapped digits not be ignored. The historical implementation of the map command did not permit map commands that were more than a single character in length if the first character was printable. This behavior is permitted, but not required, by POSIX.1-2008. Historically, mapped characters were remapped unless the remap edit option was not set, or the prefix of the mapped characters matched the mapping characters; for example, in the map: :map ab abcd the characters "ab" were used as is and were not remapped, but the characters "cd" were mapped if appropriate. This can cause infinite loops in the vi mapping mechanisms. POSIX.1-2008 requires conformance to historical practice, and that such loops be interruptible. Text input maps had the same problems with expanding the lhs for the ex map! and unmap! commands as did the ex abbreviate and unabbreviate commands. See the RATIONALE for the ex abbreviate command. POSIX.1-2008 requires similar modification of some historical practice for the map and unmap commands, as described for the abbreviate and unabbreviate commands. Historically, maps that were subsets of other maps behaved differently depending on the order in which they were defined. For example: :map! ab short :map! abc long would always translate the characters "ab" to "short", regardless of how fast the characters "abc" were entered. If the entry order was reversed: :map! abc long :map! ab short the characters "ab" would cause the editor to pause, waiting for the completing 'c' character, and the characters might never be mapped to "short". For consistency and simplicity of specification, POSIX.1-2008 requires that the shortest match be used at all times. The length of time the editor spends waiting for the characters to complete the lhs is unspecified because the timing capabilities of systems are often inexact and variable, and it may depend on other factors such as the speed of the connection. The time should be long enough for the user to be able to complete the sequence, but not long enough for the user to have to wait. Some implementations of vi have added a keytime option, which permits users to set the number of 0.1 second intervals the editor waits for the completing characters. Because mapped terminal function and cursor keys tend to start with an <ESC> character, and <ESC> is the key ending vi text input mode, maps starting with <ESC> characters are generally exempted from this timeout period, or, at least, timed out differently. Mark Historically, users were able to set the ``previous context'' marks explicitly. In addition, the ex commands '' and '` and the vi commands '', ``, `', and '` all referred to the same mark. In addition, the previous context marks were not set if the command, with which the address setting the mark was associated, failed. POSIX.1-2008 requires conformance to historical practice. Historically, if marked lines were deleted, the mark was also deleted, but would reappear if the change was undone. POSIX.1-2008 requires conformance to historical practice. The description of the special events that set the ` and ' marks matches historical practice.
For example, historically the command /a/,/b/ did not set the ` and ' marks, but the command /a/,/b/delete did. Next Historically, any ex command could be entered as a +command argument to the next command, although some (for example, insert and append) were known to confuse historical implementations. POSIX.12008 requires that any command be permitted and that it behave as specified. The next command can accept more than one file, so usage such as: next `ls [abc] ` is valid; it need not be valid for the edit or read commands, for example, because they expect only one filename. Historically, the next command behaved differently from the :rewind command in that it ignored the force flag if the autowrite flag was set. For consistency, POSIX.12008 does not permit this behavior. Historically, the next command positioned the cursor as if the file had never been edited before, regardless. POSIX.12008 does not permit this behavior, for consistency with the edit command. Implementations wanting to provide a counterpart to the next command that edited the previous file have used the command prev[ious], which takes no file argument. POSIX.12008 does not require this command. Open Historically, the open command would fail if the open edit option was not set. POSIX.12008 does not mention the open edit option and does not require this behavior. Some historical implementations do not permit entering open mode from open or visual mode, only from ex mode. For consistency, POSIX.12008 does not permit this behavior. Historically, entering open mode from the command line (that is, vi +open) resulted in anomalous behaviors; for example, the ex file and set commands, and the vi command <control>G did not work. For consistency, POSIX.12008 does not permit this behavior. Historically, the open command only permitted '/' characters to be used as the search pattern delimiter. For consistency, POSIX.12008 requires that the search delimiters used by the s, global, and v commands be accepted as well. Preserve The preserve command does not historically cause the file to be considered unmodified for the purposes of future commands that may exit the editor. POSIX.12008 requires conformance to historical practice. Historical documentation stated that mail was not sent to the user when preserve was executed; however, historical implementations did send mail in this case. POSIX.12008 requires conformance to the historical implementations. Print The writing of NUL by the print command is not specified as a special case because the standard developers did not want to require ex to support NUL characters. Historically, characters were displayed using the ARPA standard mappings, which are as follows: 1. Printable characters are left alone. 2. Control characters less than \177 are represented as '^' followed by the character offset from the '@' character in the ASCII map; for example, \007 is represented as '^G'. 3. \177 is represented as '^' followed by '?'. The display of characters having their eighth bit set was less standard. Existing implementations use hex (0x00), octal (\000), and a meta-bit display. (The latter displayed bytes that had their eighth bit set as the two characters "M-" followed by the seven-bit display as described above.) The latter probably has the best claim to historical practice because it was used for the -v option of 4 BSD and 4 BSD-derived versions of the cat utility since 1980. No specific display format is required by POSIX.12008. 
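As an illustration only (this is a sketch of the meta-bit display convention mentioned above, following the 4 BSD cat -v behavior; no particular format is required), such a display might map bytes as follows:

    \007    ^G      control character, shown as '^' plus the offset from '@'
    \011    ^I      the <tab> character
    \177    ^?
    \207    M-^G    eighth bit set on a control character
    \307    M-G     eighth bit set on a printable character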
Explicit dependence on the ASCII character set has been avoided where possible, hence the use of the phrase an ``implementation- defined multi-character sequence'' for the display of non- printable characters in preference to the historical usage of, for instance, "^I" for the <tab>. Implementations are encouraged to conform to historical practice in the absence of any strong reason to diverge. Historically, all ex commands beginning with the letter 'p' could be entered using capitalized versions of the commands; for example, P[rint], Pre[serve], and Pu[t] were all valid command names. POSIX.12008 permits, but does not require, this historical practice because capital forms of the commands are used by some implementations for other purposes. Put Historically, an ex put command, executed from open or visual mode, was the same as the open or visual mode P command, if the buffer was named and was cut in character mode, and the same as the p command if the buffer was named and cut in line mode. If the unnamed buffer was the source of the text, the entire line from which the text was taken was usually put, and the buffer was handled as if in line mode, but it was possible to get extremely anomalous behavior. In addition, using the Q command to switch into ex mode, and then doing a put often resulted in errors as well, such as appending text that was unrelated to the (supposed) contents of the buffer. For consistency and simplicity of specification, POSIX.12008 does not permit these behaviors. All ex put commands are required to operate in line mode, and the contents of the buffers are not altered by changing the mode of the editor. Read Historically, an ex read command executed from open or visual mode, executed in an empty file, left an empty line as the first line of the file. For consistency and simplicity of specification, POSIX.12008 does not permit this behavior. Historically, a read in open or visual mode from a program left the cursor at the last line read in, not the first. For consistency, POSIX.12008 does not permit this behavior. Historical implementations of ex were unable to undo read commands that read from the output of a program. For consistency, POSIX.12008 does not permit this behavior. Historically, the ex and vi message after a successful read or write command specified ``characters'', not ``bytes''. POSIX.12008 requires that the number of bytes be displayed, not the number of characters, because it may be difficult in multi- byte implementations to determine the number of characters read. Implementations are encouraged to clarify the message displayed to the user. Historically, reads were not permitted on files other than type regular, except that FIFO files could be read (probably only because they did not exist when ex and vi were originally written). Because the historical ex evaluated read! and read ! equivalently, there can be no optional way to force the read. POSIX.12008 permits, but does not require, this behavior. Recover Some historical implementations of the editor permitted users to recover the edit buffer contents from a previous edit session, and then exit without saving those contents (or explicitly discarding them). The intent of POSIX.12008 in requiring that the edit buffer be treated as already modified is to prevent this user error. Rewind Historical implementations supported the rewind command when the user was editing the first file in the list; that is, the file that the rewind command would edit. 
POSIX.12008 requires conformance to historical practice. Substitute Historically, ex accepted an r option to the s command. The effect of the r option was to use the last regular expression used in any command as the pattern, the same as the ~ command. The r option is not required by POSIX.12008. Historically, the c and g options were toggled; for example, the command :s/abc/def/ was the same as s/abc/def/ccccgggg. For simplicity of specification, POSIX.12008 does not permit this behavior. The tilde command is often used to replace the last search RE. For example, in the sequence: s/red/blue/ /green ~ the ~ command is equivalent to: s/green/blue/ Historically, ex accepted all of the following forms: s/abc/def/ s/abc/def s/abc/ s/abc POSIX.12008 requires conformance to this historical practice. The s command presumes that the '^' character only occupies a single column in the display. Much of the ex and vi specification presumes that the <space> only occupies a single column in the display. There are no known character sets for which this is not true. Historically, the final column position for the substitute commands was based on previous column movements; a search for a pattern followed by a substitution would leave the column position unchanged, while a 0 command followed by a substitution would change the column position to the first non-<blank>. For consistency and simplicity of specification, POSIX.12008 requires that the final column position always be set to the first non-<blank>. Set Historical implementations redisplayed all of the options for each occurrence of the all keyword. POSIX.12008 permits, but does not require, this behavior. Tag No requirement is made as to where ex and vi shall look for the file referenced by the tag entry. Historical practice has been to look for the path found in the tags file, based on the current directory. A useful extension found in some implementations is to look based on the directory containing the tags file that held the entry, as well. No requirement is made as to which reference for the tag in the tags file is used. This is deliberate, in order to permit extensions such as multiple entries in a tags file for a tag. Because users often specify many different tags files, some of which need not be relevant or exist at any particular time, POSIX.12008 requires that error messages about problem tags files be displayed only if the requested tag is not found, and then, only once for each time that the tag edit option is changed. The requirement that the current edit buffer be unmodified is only necessary if the file indicated by the tag entry is not the same as the current file (as defined by the current pathname). Historically, the file would be reloaded if the filename had changed, as well as if the filename was different from the current pathname. For consistency and simplicity of specification, POSIX.12008 does not permit this behavior, requiring that the name be the only factor in the decision. Historically, vi only searched for tags in the current file from the current cursor to the end of the file, and therefore, if the wrapscan option was not set, tags occurring before the current cursor were not found. POSIX.12008 considers this a bug, and implementations are required to search for the first occurrence in the file, regardless. Undo The undo description deliberately uses the word ``modified''. The undo command is not intended to undo commands that replace the contents of the edit buffer, such as edit, next, tag, or recover. 
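For example (a sketch of the distinction described above; other.c is a hypothetical file name), a session might proceed as follows:

    :delete             modifies the edit buffer; can be undone
    :undo               restores the deleted line
    :edit! other.c      replaces the contents of the edit buffer
    :undo               is not expected to restore the previous buffer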
Cursor positioning after the undo command was inconsistent in the historical vi, sometimes attempting to restore the original cursor position (global, undo, and v commands), and sometimes, in the presence of maps, placing the cursor on the last line added or changed instead of the first. POSIX.12008 requires a simplified behavior for consistency and simplicity of specification. Version The version command cannot be exactly specified since there is no widely-accepted definition of what the version information should contain. Implementations are encouraged to do something reasonably intelligent. Write Historically, the ex and vi message after a successful read or write command specified ``characters'', not ``bytes''. POSIX.12008 requires that the number of bytes be displayed, not the number of characters because it may be difficult in multi- byte implementations to determine the number of characters written. Implementations are encouraged to clarify the message displayed to the user. Implementation-defined tests are permitted so that implementations can make additional checks; for example, for locks or file modification times. Historically, attempting to append to a nonexistent file caused an error. It has been left unspecified in POSIX.12008 to permit implementations to let the write succeed, so that the append semantics are similar to those of the historical csh. Historical vi permitted empty edit buffers to be written. However, since the way vi got around dealing with ``empty'' files was to always have a line in the edit buffer, no matter what, it wrote them as files of a single, empty line. POSIX.12008 does not permit this behavior. Historically, ex restored standard output and standard error to their values as of when ex was invoked, before writes to programs were performed. This could disturb the terminal configuration as well as be a security issue for some terminals. POSIX.12008 does not permit this, requiring that the program output be captured and displayed as if by the ex print command. Adjust Window Historically, the line count was set to the value of the scroll option if the type character was end-of-file. This feature was broken on most historical implementations long ago, however, and is not documented anywhere. For this reason, POSIX.12008 is resolutely silent. Historically, the z command was <blank>-sensitive and z + and z - did different things than z+ and z- because the type could not be distinguished from a flag. (The commands z . and z = were historically invalid.) POSIX.12008 requires conformance to this historical practice. Historically, the z command was further <blank>-sensitive in that the count could not be <blank>-delimited; for example, the commands z= 5 and z- 5 were also invalid. Because the count is not ambiguous with respect to either the type character or the flags, this is not permitted by POSIX.12008. Escape Historically, ex filter commands only read the standard output of the commands, letting standard error appear on the terminal as usual. The vi utility, however, read both standard output and standard error. POSIX.12008 requires the latter behavior for both ex and vi, for consistency. Shift Left and Shift Right Historically, it was possible to add shift characters to increase the effect of the command; for example, <<< outdented (or >>> indented) the lines 3 levels of indentation instead of the default 1. POSIX.12008 requires conformance to historical practice. 
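For example (assuming the historical multiplication of shift characters described above and the default shiftwidth), the following commands illustrate the difference:

    :1,5>       shift lines 1 through 5 right by one level of indentation
    :1,5>>>     shift lines 1 through 5 right by three levels of indentation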
<control>D Historically, the <control>D command erased the prompt, providing the user with an unbroken presentation of lines from the edit buffer. This is not required by POSIX.1-2008; implementations are encouraged to provide it if possible. Historically, the <control>D command took, and then ignored, a count. POSIX.1-2008 does not permit this behavior. Write Line Number Historically, the ex = command, when executed in ex mode in an empty edit buffer, reported 0, and from open or visual mode, reported 1. For consistency and simplicity of specification, POSIX.1-2008 does not permit this behavior. Execute Historically, ex did not correctly handle the inclusion of text input commands (that is, append, insert, and change) in executed buffers. POSIX.1-2008 does not permit this exclusion for consistency. Historically, the logical contents of the buffer being executed did not change if the buffer itself were modified by the commands being executed; that is, buffer execution did not support self-modifying code. POSIX.1-2008 requires conformance to historical practice. Historically, the @ command took a range of lines, and the @ buffer was executed once per line, with the current line ('.') set to each specified line. POSIX.1-2008 requires conformance to historical practice. Some historical implementations did not notice if errors occurred during buffer execution. This, coupled with the ability to specify a range of lines for the ex @ command, makes it trivial to cause them to drop core. POSIX.1-2008 requires that implementations stop buffer execution if any error occurs, if the specified line doesn't exist, or if the contents of the edit buffer itself are replaced (for example, the buffer executes the ex :edit command). Regular Expressions in ex Historical practice is that the characters in the replacement part of the last s command (that is, those matched by entering a '~' in the regular expression) were not further expanded by the regular expression engine. So, if the characters contained the string "a.", they would match 'a' followed by '.', and not 'a' followed by any character. POSIX.1-2008 requires conformance to historical practice. Edit Options in ex The following paragraphs describe the historical behavior of some edit options that were not, for whatever reason, included in POSIX.1-2008. Implementations are strongly encouraged to only use these names if the functionality described here is fully supported. extended The extended edit option has been used in some implementations of vi to provide extended regular expressions instead of basic regular expressions. This option was omitted from POSIX.1-2008 because it is not widespread historical practice. flash The flash edit option historically caused the screen to flash instead of beeping on error. This option was omitted from POSIX.1-2008 because it is not found in some historical implementations. hardtabs The hardtabs edit option historically defined the number of columns between hardware tab settings. This option was omitted from POSIX.1-2008 because it was believed to no longer be generally useful. modeline The modeline (sometimes named modelines) edit option historically caused ex or vi to read the first and last five lines of the file for editor commands. This option is a security problem, and vendors are strongly encouraged to delete it from historical implementations. open The open edit option historically disallowed the ex open and visual commands. This edit option was omitted because these commands are required by POSIX.1-2008.
optimize The optimize edit option historically expedited text throughput by setting the terminal to not do automatic <carriage-return> characters when printing more than one logical line of output. This option was omitted from POSIX.12008 because it was intended for terminals without addressable cursors, which are rarely, if ever, still used. ruler The ruler edit option has been used in some implementations of vi to present a current row/column ruler for the user. This option was omitted from POSIX.12008 because it is not widespread historical practice. sourceany The sourceany edit option historically caused ex or vi to source start-up files that were owned by users other than the user running the editor. This option is a security problem, and vendors are strongly encouraged to remove it from their implementations. timeout The timeout edit option historically enabled the (now standard) feature of only waiting for a short period before returning keys that could be part of a macro. This feature was omitted from POSIX.12008 because its behavior is now standard, it is not widely useful, and it was rarely documented. verbose The verbose edit option has been used in some implementations of vi to cause vi to output error messages for common errors; for example, attempting to move the cursor past the beginning or end of the line instead of only alerting the screen. (The historical vi only alerted the terminal and presented no message for such errors. The historical editor option terse did not select when to present error messages, it only made existing error messages more or less verbose.) This option was omitted from POSIX.12008 because it is not widespread historical practice; however, implementors are encouraged to use it if they wish to provide error messages for naive users. wraplen The wraplen edit option has been used in some implementations of vi to specify an automatic margin measured from the left margin instead of from the right margin. This is useful when multiple screen sizes are being used to edit a single file. This option was omitted from POSIX.12008 because it is not widespread historical practice; however, implementors are encouraged to use it if they add this functionality. autoindent, ai Historically, the command 0a did not do any autoindentation, regardless of the current indentation of line 1. POSIX.12008 requires that any indentation present in line 1 be used. autoprint, ap Historically, the autoprint edit option was not completely consistent or based solely on modifications to the edit buffer. Exceptions were the read command (when reading from a file, but not from a filter), the append, change, insert, global, and v commands, all of which were not affected by autoprint, and the tag command, which was affected by autoprint. POSIX.12008 requires conformance to historical practice. Historically, the autoprint option only applied to the last of multiple commands entered using <vertical-line> delimiters; for example, delete <newline> was affected by autoprint, but delete|version <newline> was not. POSIX.12008 requires conformance to historical practice. autowrite, aw Appending the '!' character to the ex next command to avoid performing an automatic write was not supported in historical implementations. POSIX.12008 requires that the behavior match the other ex commands for consistency. ignorecase, ic Historical implementations of case-insensitive matching (the ignorecase edit option) lead to counter-intuitive situations when uppercase characters were used in range expressions. 
Historically, the process was as follows: 1. Take a line of text from the edit buffer. 2. Convert uppercase to lowercase in the text line. 3. Convert uppercase to lowercase in regular expressions, except in character class specifications. 4. Match regular expressions against text. This would mean that, with ignorecase in effect, the text: The cat sat on the mat would be matched by /^the/ but not by: /^[A-Z]he/ For consistency with other commands implementing regular expressions, POSIX.1-2008 does not permit this behavior. paragraphs, para The ISO POSIX-2:1993 standard made the default paragraphs and sections edit options implementation-defined, arguing they were historically oriented to the UNIX system troff text formatter, and a ``portable user'' could use the {, }, [[, ]], (, and ) commands in open or visual mode and have the cursor stop in unexpected places. POSIX.1-2008 specifies their values in the POSIX locale because the unusual grouping (they only work when grouped into two characters at a time) means that they cannot be used for general-purpose movement, regardless. readonly Implementations are encouraged to provide the best possible information to the user as to the read-only status of the file, with the exception that they should not consider the current special privileges of the process. This provides users with a safety net because they must force the overwrite of read-only files, even when running with additional privileges. The readonly edit option specification largely conforms to historical practice. The only difference is that historical implementations did not notice that the user had set the readonly edit option in cases where the file was already marked read-only for some reason, and would therefore reinitialize the readonly edit option the next time the contents of the edit buffer were replaced. This behavior is disallowed by POSIX.1-2008. report The requirement that lines copied to a buffer interact differently than deleted lines is historical practice. For example, if the report edit option is set to 3, deleting 3 lines will cause a report to be written, but 4 lines must be copied before a report is written. The requirement that the ex global, v, open, undo, and visual commands present reports based on the total number of lines added or deleted during the command execution, and that commands executed by the global and v commands not present reports, is historical practice. POSIX.1-2008 extends historical practice by requiring that buffer execution be treated similarly. The reasons for this are two-fold. Historically, only the report by the last command executed from the buffer would be seen by the user, as each new report would overwrite the last. In addition, the standard developers believed that buffer execution had more in common with global and v commands than it did with other ex commands, and should behave similarly, for consistency and simplicity of specification. showmatch, sm The length of time the cursor spends on the matching character is unspecified because the timing capabilities of systems are often inexact and variable. The time should be long enough for the user to notice, but not long enough for the user to become annoyed. Some implementations of vi have added a matchtime option that permits users to set the number of 0.1 second intervals the cursor pauses on the matching character. showmode The showmode option has been used in some historical implementations of ex and vi to display the current editing mode when in open or visual mode.
The editing modes have generally included ``command'' and ``input'', and sometimes other modes such as ``replace'' and ``change''. The string was usually displayed on the bottom line of the screen at the far right-hand corner. In addition, a preceding '*' character often denoted whether the contents of the edit buffer had been modified. The latter display has sometimes been part of the showmode option, and sometimes based on another option. This option was not available in the 4 BSD historical implementation of vi, but was viewed as generally useful, particularly to novice users, and is required by POSIX.12008. The smd shorthand for the showmode option was not present in all historical implementations of the editor. POSIX.12008 requires it, for consistency. Not all historical implementations of the editor displayed a mode string for command mode, differentiating command mode from text input mode by the absence of a mode string. POSIX.12008 permits this behavior for consistency with historical practice, but implementations are encouraged to provide a display string for both modes. slowopen Historically, the slowopen option was automatically set if the terminal baud rate was less than 1200 baud, or if the baud rate was 1200 baud and the redraw option was not set. The slowopen option had two effects. First, when inserting characters in the middle of a line, characters after the cursor would not be pushed ahead, but would appear to be overwritten. Second, when creating a new line of text, lines after the current line would not be scrolled down, but would appear to be overwritten. In both cases, ending text input mode would cause the screen to be refreshed to match the actual contents of the edit buffer. Finally, terminals that were sufficiently intelligent caused the editor to ignore the slowopen option. POSIX.12008 permits most historical behavior, extending historical practice to require slowopen behaviors if the edit option is set by the user. tags The default path for tags files is left unspecified as implementations may have their own tags implementations that do not correspond to the historical ones. The default tags option value should probably at least include the file ./tags. term Historical implementations of ex and vi ignored changes to the term edit option after the initial terminal information was loaded. This is permitted by POSIX.12008; however, implementations are encouraged to permit the user to modify their terminal type at any time. terse Historically, the terse edit option optionally provided a shorter, less descriptive error message, for some error messages. This is permitted, but not required, by POSIX.12008. Historically, most common visual mode errors (for example, trying to move the cursor past the end of a line) did not result in an error message, but simply alerted the terminal. Implementations wishing to provide messages for novice users are urged to do so based on the edit option verbose, and not terse. window In historical implementations, the default for the window edit option was based on the baud rate as follows: 1. If the baud rate was less than 1200, the edit option w300 set the window value; for example, the line: set w300=12 would set the window option to 12 if the baud rate was less than 1200. 2. If the baud rate was equal to 1200, the edit option w1200 set the window value. 3. If the baud rate was greater than 1200, the edit option w9600 set the window value. 
The w300, w1200, and w9600 options do not appear in POSIX.1-2008 because of their dependence on specific baud rates. In historical implementations, the size of the window displayed by various commands was related to, but not necessarily the same as, the window edit option. For example, the size of the window was set by the ex command visual 10, but it did not change the value of the window edit option. However, changing the value of the window edit option did change the number of lines that were displayed when the screen was repainted. POSIX.1-2008 does not permit this behavior in the interests of consistency and simplicity of specification, and requires that all commands that change the number of lines that are displayed do it by setting the value of the window edit option. wrapmargin, wm Historically, the wrapmargin option did not affect maps inserting characters that also had associated counts; for example, :map K 5aABC DEF. Unfortunately, there are widely used maps that depend on this behavior. For consistency and simplicity of specification, POSIX.1-2008 does not permit this behavior. Historically, wrapmargin was calculated using the column display width of all characters on the screen. For example, an implementation using "^I" to represent <tab> characters when the list edit option was set, where '^' and 'I' each took up a single column on the screen, would calculate the wrapmargin based on a value of 2 for each <tab>. The number edit option similarly changed the effective length of the line as well. POSIX.1-2008 requires conformance to historical practice. Earlier versions of this standard allowed for implementations with bytes other than eight bits, but this has been modified in this version. FUTURE DIRECTIONS top None. SEE ALSO top Section 2.9.1.1, Command Search and Execution, ctags(1p), ed(1p), sed(1p), sh(1p), stty(1p), vi(1p) The Base Definitions volume of POSIX.1-2017, Table 5-1, Escape Sequences and Associated Actions, Chapter 8, Environment Variables, Section 9.3, Basic Regular Expressions, Section 12.2, Utility Syntax Guidelines The System Interfaces volume of POSIX.1-2017, access(3p) COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 EX(1P) Pages that refer to this page: ed(1p), more(1p), vi(1p)
# ex\n\n> Command-line text editor.\n> See also: `vim`.\n> More information: <https://www.vim.org>.\n\n- Open a file:\n\n`ex {{path/to/file}}`\n\n- Save and quit:\n\n`wq<Enter>`\n\n- Undo the last operation:\n\n`undo<Enter>`\n\n- Search for a pattern in the file:\n\n`/{{search_pattern}}<Enter>`\n\n- Perform a regular expression substitution in the whole file:\n\n`%s/{{regular_expression}}/{{replacement}}/g<Enter>`\n\n- Insert text:\n\n`i<Enter>{{text}}<C-c>`\n\n- Switch to Vim:\n\n`visual<Enter>`\n
exec
exec(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training exec(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT EXEC(1P) POSIX Programmer's Manual EXEC(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top exec execute commands and open, close, or copy file descriptors SYNOPSIS top exec [command [argument...]] DESCRIPTION top The exec utility shall open, close, and/or copy file descriptors as specified by any redirections as part of the command. If exec is specified without command or arguments, and any file descriptors with numbers greater than 2 are opened with associated redirection statements, it is unspecified whether those file descriptors remain open when the shell invokes another utility. Scripts concerned that child shells could misuse open file descriptors can always close them explicitly, as shown in one of the following examples. If exec is specified with command, it shall replace the shell with command without creating a new process. If arguments are specified, they shall be arguments to command. Redirection affects the current shell execution environment. OPTIONS top None. OPERANDS top See the DESCRIPTION. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top None. ASYNCHRONOUS EVENTS top Default. STDOUT top Not used. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top If command is specified, exec shall not return to the shell; rather, the exit status of the process shall be the exit status of the program implementing command, which overlaid the shell. If command is not found, the exit status shall be 127. If command is found, but it is not an executable utility, the exit status shall be 126. If a redirection error occurs (see Section 2.8.1, Consequences of Shell Errors), the shell shall exit with a value in the range 1-125. Otherwise, exec shall return a zero exit status. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top None. EXAMPLES top Open readfile as file descriptor 3 for reading: exec 3< readfile Open writefile as file descriptor 4 for writing: exec 4> writefile Make file descriptor 5 a copy of file descriptor 0: exec 5<&0 Close file descriptor 3: exec 3<&- Cat the file maggie by replacing the current shell with the cat utility: exec cat maggie RATIONALE top Most historical implementations were not conformant in that: foo=bar exec cmd did not pass foo to cmd. FUTURE DIRECTIONS top None. SEE ALSO top Section 2.14, Special Built-In Utilities COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. 
In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 EXEC(1P) Pages that refer to this page: fcntl.h(0p), stdarg.h(0p), unistd.h(0p), awk(1p), c99(1p), command(1p), fort77(1p), make(1p), newgrp(1p), sh(1p), xargs(1p), aio_error(3p), aio_read(3p), aio_return(3p), aio_write(3p), alarm(3p), atexit(3p), chmod(3p), close(3p), confstr(3p), environ(3p), exit(3p), fcntl(3p), fexecve(3p), fork(3p), fstatvfs(3p), getenv(3p), getitimer(3p), getopt(3p), getpgid(3p), getpgrp(3p), getpid(3p), getppid(3p), getrlimit(3p), getsid(3p), glob(3p), lio_listio(3p), mknod(3p), mlock(3p), mlockall(3p), mmap(3p), nice(3p), open(3p), posix_spawn(3p), posix_trace_create(3p), posix_trace_event(3p), posix_trace_eventid_equal(3p), posix_typed_mem_open(3p), pthread_atfork(3p), pthread_sigmask(3p), putenv(3p), readdir(3p), semop(3p), setegid(3p), setenv(3p), seteuid(3p), setgid(3p), setlocale(3p), setpgid(3p), setpgrp(3p), setregid(3p), setuid(3p), shmat(3p), shmdt(3p), shm_open(3p), sigaction(3p), sigaltstack(3p), sighold(3p), signal(3p), sigpending(3p), system(3p), times(3p), ulimit(3p), umask(3p), wait(3p), waitid(3p), wordexp(3p)
# exec\n\n> Execute a command without creating a child process.\n> More information: <https://www.gnu.org/software/bash/manual/bash.html#index-exec>.\n\n- Execute a specific command:\n\n`exec {{command -with -flags}}`\n\n- Execute a command with a (mostly) empty environment:\n\n`exec -c {{command -with -flags}}`\n\n- Execute a command as a login shell:\n\n`exec -l {{command -with -flags}}`\n\n- Execute a command with a different name:\n\n`exec -a {{name}} {{command -with -flags}}`\n
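A common use of the replacing form shown above is a small wrapper script that prepares the environment and then hands its process over to the real program. The sketch below is illustrative only: the program path and the exported variable are made up, and plain POSIX `exec` is used rather than the bash-specific `-c`/`-l`/`-a` flags.

```sh
#!/bin/sh
# Hypothetical wrapper: set up the environment, then replace this shell with
# the real program so no extra shell process is left behind.
REAL_PROG=/usr/bin/env         # stand-in for the wrapped program
export EXAMPLE_MODE=verbose    # made-up variable the wrapped program might read

# Never returns on success; if the program cannot be executed, the shell
# itself exits with status 126 or 127 (see EXIT STATUS above).
exec "$REAL_PROG" "$@"
```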
exit
EXIT(1P) POSIX Programmer's Manual EXIT(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top exit cause the shell to exit SYNOPSIS top exit [n] DESCRIPTION top The exit utility shall cause the shell to exit from its current execution environment with the exit status specified by the unsigned decimal integer n. If the current execution environment is a subshell environment, the shell shall exit from the subshell environment with the specified exit status and continue in the environment from which that subshell environment was invoked; otherwise, the shell utility shall terminate with the specified exit status. If n is specified, but its value is not between 0 and 255 inclusively, the exit status is undefined. A trap on EXIT shall be executed before the shell terminates, except when the exit utility is invoked in that trap itself, in which case the shell shall exit immediately. OPTIONS top None. OPERANDS top See the DESCRIPTION. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top None. ASYNCHRONOUS EVENTS top Default. STDOUT top Not used. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The exit status shall be n, if specified, except that the behavior is unspecified if n is not an unsigned decimal integer or is greater than 255. Otherwise, the value shall be the exit value of the last command executed, or zero if no command was executed. When exit is executed in a trap action, the last command is considered to be the command that executed immediately preceding the trap action. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top None. EXAMPLES top Exit with a true value: exit 0 Exit with a false value: exit 1 Propagate error handling from within a subshell: ( command1 || exit 1 command2 || exit 1 exec command3 ) > outputfile || exit 1 echo "outputfile created successfully" RATIONALE top As explained in other sections, certain exit status values have been reserved for special uses and should be used by applications only for those purposes: 126 A file to be executed was found, but it was not an executable utility. 127 A utility to be executed was not found. >128 A command was interrupted by a signal. The behavior of exit when given an invalid argument or unknown option is unspecified, because of differing practices in the various historical implementations. A value larger than 255 might be truncated by the shell, and be unavailable even to a parent process that uses waitid() to get the full exit value. It is recommended that implementations that detect any usage error should cause a non-zero exit status (or, if the shell is interactive and the error does not cause the shell to abort, store a non-zero value in "$?"), but even this was not done historically in all shells. FUTURE DIRECTIONS top None.
SEE ALSO top Section 2.14, Special Built-In Utilities COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 EXIT(1P)
# exit\n\n> Exit the shell.\n> More information: <https://manned.org/exit.1posix>.\n\n- Exit with the exit status of the most recently executed command:\n\n`exit`\n\n- Exit with a specific exit status:\n\n`exit {{exit_code}}`\n
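To make the trap-on-EXIT wording in the manual text above concrete, here is a minimal sketch (assuming a POSIX shell; the scratch-file name is made up) in which the EXIT trap runs just before the shell terminates and the script's status comes from the argument given to exit:

```sh
#!/bin/sh
tmpfile=/tmp/exit-demo.$$          # made-up scratch file
trap 'rm -f "$tmpfile"' EXIT       # executed once, just before the shell exits

date > "$tmpfile" || exit 1        # report failure with a non-zero status

echo "wrote $tmpfile"
exit 0                             # the EXIT trap still runs, then status 0 is returned
```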
expand
EXPAND(1) User Commands EXPAND(1) NAME top expand - convert tabs to spaces SYNOPSIS top expand [OPTION]... [FILE]... DESCRIPTION top Convert tabs in each FILE to spaces, writing to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -i, --initial do not convert tabs after non blanks -t, --tabs=N have tabs N characters apart, not 8 -t, --tabs=LIST use comma separated list of tab positions. The last specified position can be prefixed with '/' to specify a tab size to use after the last explicitly specified tab stop. Also a prefix of '+' can be used to align remaining tab stops relative to the last specified tab stop instead of the first column --help display this help and exit --version output version information and exit AUTHOR top Written by David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top unexpand(1) Full documentation <https://www.gnu.org/software/coreutils/expand> or available locally via: info '(coreutils) expand invocation' GNU coreutils 9.4 August 2023 EXPAND(1)
# expand\n\n> Convert tabs to spaces.\n> More information: <https://www.gnu.org/software/coreutils/expand>.\n\n- Convert tabs in each file to spaces, writing to `stdout`:\n\n`expand {{path/to/file}}`\n\n- Convert tabs to spaces, reading from `stdin`:\n\n`expand`\n\n- Do not convert tabs after non blanks:\n\n`expand -i {{path/to/file}}`\n\n- Have tabs a certain number of characters apart, not 8:\n\n`expand -t {{number}} {{path/to/file}}`\n\n- Use a comma separated list of explicit tab positions:\n\n`expand -t {{1,4,6}}`\n
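The options listed above are easiest to see on a small sample. The sketch below is illustrative only: the printf input and file name are made up, and the `/` prefix in the tab list assumes a reasonably recent GNU coreutils expand (as described in the DESCRIPTION above).

```sh
# Build a tab-separated sample line, then expand it with different tab stops.
printf 'name\tvalue\tcomment\n' > sample.txt   # made-up input file

expand sample.txt              # default tab stops every 8 columns
expand -t 4 sample.txt         # tab stops every 4 columns
expand -t 12,20 sample.txt     # explicit stops at columns 12 and 20
expand -t 8,/4 sample.txt      # stop at 8, then every 4 columns (GNU extension)
expand -i sample.txt           # convert only tabs that come before non-blank text
```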
expect
EXPECT(1) General Commands Manual EXPECT(1) NAME top expect - programmed dialogue with interactive programs, Version 5 SYNOPSIS top expect [ -dDinN ] [ -c cmds ] [ [ -[f|b] ] cmdfile ] [ args ] INTRODUCTION top Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script. Expectk is a mixture of Expect and Tk. It behaves just like Expect and Tk's wish. Expect can also be used directly in C or C++ (that is, without Tcl). See libexpect(3). The name "Expect" comes from the idea of send/expect sequences popularized by uucp, kermit and other modem control programs. However unlike uucp, Expect is generalized so that it can be run as a user-level command with any program and task in mind. Expect can actually talk to several programs at the same time. For example, here are some things Expect can do: Cause your computer to dial you back, so that you can login without paying for the call. Start a game (e.g., rogue) and if the optimal configuration doesn't appear, restart it (again and again) until it does, then hand over control to you. Run fsck, and in response to its questions, answer "yes", "no" or give control back to you, based on predetermined criteria. Connect to another network or BBS (e.g., MCI Mail, CompuServe) and automatically retrieve your mail so that it appears as if it was originally sent to your local system. Carry environment variables, current directory, or any kind of information across rlogin, telnet, tip, su, chgrp, etc. There are a variety of reasons why the shell cannot perform these tasks. (Try, you'll see.) All are possible with Expect. In general, Expect is useful for running any program which requires interaction between the program and the user. All that is necessary is that the interaction can be characterized programmatically. Expect can also give the user back control (without halting the program being controlled) if desired. Similarly, the user can return control to the script at any time. USAGE top Expect reads cmdfile for a list of commands to execute. Expect may also be invoked implicitly on systems which support the #! notation by marking the script executable, and making the first line in your script: #!/usr/local/bin/expect -f Of course, the path must accurately describe where Expect lives. /usr/local/bin is just an example. The -c flag prefaces a command to be executed before any in the script. The command should be quoted to prevent being broken up by the shell. This option may be used multiple times. Multiple commands may be executed with a single -c by separating them with semicolons. Commands are executed in the order they appear. (When using Expectk, this option is specified as -command.) The -d flag enables some diagnostic output, which primarily reports internal activity of commands such as expect and interact.
This flag has the same effect as "exp_internal 1" at the beginning of an Expect script, plus the version of Expect is printed. (The strace command is useful for tracing statements, and the trace command is useful for tracing variable assignments.) (When using Expectk, this option is specified as -diag.) The -D flag enables an interactive debugger. An integer value should follow. The debugger will take control before the next Tcl procedure if the value is non-zero or if a ^C is pressed (or a breakpoint is hit, or other appropriate debugger command appears in the script). See the README file or SEE ALSO (below) for more information on the debugger. (When using Expectk, this option is specified as -Debug.) The -f flag prefaces a file from which to read commands from. The flag itself is optional as it is only useful when using the #! notation (see above), so that other arguments may be supplied on the command line. (When using Expectk, this option is specified as -file.) By default, the command file is read into memory and executed in its entirety. It is occasionally desirable to read files one line at a time. For example, stdin is read this way. In order to force arbitrary files to be handled this way, use the -b flag. (When using Expectk, this option is specified as -buffer.) Note that stdio-buffering may still take place however this shouldn't cause problems when reading from a fifo or stdin. If the string "-" is supplied as a filename, standard input is read instead. (Use "./-" to read from a file actually named "-".) The -i flag causes Expect to interactively prompt for commands instead of reading them from a file. Prompting is terminated via the exit command or upon EOF. See interpreter (below) for more information. -i is assumed if neither a command file nor -c is used. (When using Expectk, this option is specified as -interactive.) -- may be used to delimit the end of the options. This is useful if you want to pass an option-like argument to your script without it being interpreted by Expect. This can usefully be placed in the #! line to prevent any flag-like interpretation by Expect. For example, the following will leave the original arguments (including the script name) in the variable argv. #!/usr/local/bin/expect -- Note that the usual getopt(3) and execve(2) conventions must be observed when adding arguments to the #! line. The file $exp_library/expect.rc is sourced automatically if present, unless the -N flag is used. (When using Expectk, this option is specified as -NORC.) Immediately after this, the file ~/.expect.rc is sourced automatically, unless the -n flag is used. If the environment variable DOTDIR is defined, it is treated as a directory and .expect.rc is read from there. (When using Expectk, this option is specified as -norc.) This sourcing occurs only after executing any -c flags. -v causes Expect to print its version number and exit. (The corresponding flag in Expectk, which uses long flag names, is -version.) Optional args are constructed into a list and stored in the variable named argv. argc is initialized to the length of argv. argv0 is defined to be the name of the script (or binary if no script is used). For example, the following prints out the name of the script and the first three arguments: send_user "$argv0 [lrange $argv 0 2]\n" COMMANDS top Expect uses Tcl (Tool Command Language). Tcl provides control flow (e.g., if, for, break), expression evaluation and several other features such as recursion, procedure definition, etc. 
Commands used here but not defined (e.g., set, if, exec) are Tcl commands (see tcl(3)). Expect supports additional commands, described below. Unless otherwise specified, commands return the empty string. Commands are listed alphabetically so that they can be quickly located. However, new users may find it easier to start by reading the descriptions of spawn, send, expect, and interact, in that order. Note that the best introduction to the language (both Expect and Tcl) is provided in the book "Exploring Expect" (see SEE ALSO below). Examples are included in this man page but they are very limited since this man page is meant primarily as reference material. Note that in the text of this man page, "Expect" with an uppercase "E" refers to the Expect program while "expect" with a lower-case "e" refers to the expect command within the Expect program.) close [-slave] [-onexec 0|1] [-i spawn_id] closes the connection to the current process. Most interactive programs will detect EOF on their stdin and exit; thus close usually suffices to kill the process as well. The -i flag declares the process to close corresponding to the named spawn_id. Both expect and interact will detect when the current process exits and implicitly do a close. But if you kill the process by, say, "exec kill $pid", you will need to explicitly call close. The -onexec flag determines whether the spawn id will be closed in any new spawned processes or if the process is overlayed. To leave a spawn id open, use the value 0. A non-zero integer value will force the spawn closed (the default) in any new processes. The -slave flag closes the slave associated with the spawn id. (See "spawn -pty".) When the connection is closed, the slave is automatically closed as well if still open. No matter whether the connection is closed implicitly or explicitly, you should call wait to clear up the corresponding kernel process slot. close does not call wait since there is no guarantee that closing a process connection will cause it to exit. See wait below for more info. debug [[-now] 0|1] controls a Tcl debugger allowing you to step through statements, set breakpoints, etc. With no arguments, a 1 is returned if the debugger is not running, otherwise a 0 is returned. With a 1 argument, the debugger is started. With a 0 argument, the debugger is stopped. If a 1 argument is preceded by the -now flag, the debugger is started immediately (i.e., in the middle of the debug command itself). Otherwise, the debugger is started with the next Tcl statement. The debug command does not change any traps. Compare this to starting Expect with the -D flag (see above). See the README file or SEE ALSO (below) for more information on the debugger. disconnect disconnects a forked process from the terminal. It continues running in the background. The process is given its own process group (if possible). Standard I/O is redirected to /dev/null. The following fragment uses disconnect to continue running the script in the background. if {[fork]!=0} exit disconnect . . . The following script reads a password, and then runs a program every hour that demands a password each time it is run. The script supplies the password so that you only have to type it once. (See the stty command which demonstrates how to turn off password echoing.) send_user "password?\ " expect_user -re "(.*)\n" for {} 1 {} { if {[fork]!=0} {sleep 3600;continue} disconnect spawn priv_prog expect Password: send "$expect_out(1,string)\r" . . . 
exit } An advantage to using disconnect over the shell asynchronous process feature (&) is that Expect can save the terminal parameters prior to disconnection, and then later apply them to new ptys. With &, Expect does not have a chance to read the terminal's parameters since the terminal is already disconnected by the time Expect receives control. exit [-opts] [status] causes Expect to exit or otherwise prepare to do so. The -onexit flag causes the next argument to be used as an exit handler. Without an argument, the current exit handler is returned. The -noexit flag causes Expect to prepare to exit but stop short of actually returning control to the operating system. The user-defined exit handler is run as well as Expect's own internal handlers. No further Expect commands should be executed. This is useful if you are running Expect with other Tcl extensions. The current interpreter (and main window if in the Tk environment) remain so that other Tcl extensions can clean up. If Expect's exit is called again (however this might occur), the handlers are not rerun. Upon exiting, all connections to spawned processes are closed. Closure will be detected as an EOF by spawned processes. exit takes no other actions beyond what the normal _exit(2) procedure does. Thus, spawned processes that do not check for EOF may continue to run. (A variety of conditions are important to determining, for example, what signals a spawned process will be sent, but these are system-dependent, typically documented under exit(3).) Spawned processes that continue to run will be inherited by init. status (or 0 if not specified) is returned as the exit status of Expect. exit is implicitly executed if the end of the script is reached. exp_continue [-continue_timer] The command exp_continue allows expect itself to continue executing rather than returning as it normally would. By default exp_continue resets the timeout timer. The -continue_timer flag prevents timer from being restarted. (See expect for more information.) exp_internal [-f file] value causes further commands to send diagnostic information internal to Expect to stderr if value is non-zero. This output is disabled if value is 0. The diagnostic information includes every character received, and every attempt made to match the current output against the patterns. If the optional file is supplied, all normal and debugging output is written to that file (regardless of the value of value). Any previous diagnostic output file is closed. The -info flag causes exp_internal to return a description of the most recent non-info arguments given. exp_open [args] [-i spawn_id] returns a Tcl file identifier that corresponds to the original spawn id. The file identifier can then be used as if it were opened by Tcl's open command. (The spawn id should no longer be used. A wait should not be executed. The -leaveopen flag leaves the spawn id open for access through Expect commands. A wait must be executed on the spawn id. exp_pid [-i spawn_id] returns the process id corresponding to the currently spawned process. If the -i flag is used, the pid returned corresponds to that of the given spawn id. exp_send is an alias for send. exp_send_error is an alias for send_error. exp_send_log is an alias for send_log. exp_send_tty is an alias for send_tty. exp_send_user is an alias for send_user. exp_version [[-exit] version] is useful for assuring that the script is compatible with the current version of Expect. With no arguments, the current version of Expect is returned. 
This version may then be encoded in your script. If you actually know that you are not using features of recent versions, you can specify an earlier version. Versions consist of three numbers separated by dots. First is the major number. Scripts written for versions of Expect with a different major number will almost certainly not work. exp_version returns an error if the major numbers do not match. Second is the minor number. Scripts written for a version with a greater minor number than the current version may depend upon some new feature and might not run. exp_version returns an error if the major numbers match, but the script minor number is greater than that of the running Expect. Third is a number that plays no part in the version comparison. However, it is incremented when the Expect software distribution is changed in any way, such as by additional documentation or optimization. It is reset to 0 upon each new minor version. With the -exit flag, Expect prints an error and exits if the version is out of date. expect [[-opts] pat1 body1] ... [-opts] patn [bodyn] waits until one of the patterns matches the output of a spawned process, a specified time period has passed, or an end-of-file is seen. If the final body is empty, it may be omitted. Patterns from the most recent expect_before command are implicitly used before any other patterns. Patterns from the most recent expect_after command are implicitly used after any other patterns. If the arguments to the entire expect statement require more than one line, all the arguments may be "braced" into one so as to avoid terminating each line with a backslash. In this one case, the usual Tcl substitutions will occur despite the braces. If a pattern is the keyword eof, the corresponding body is executed upon end-of-file. If a pattern is the keyword timeout, the corresponding body is executed upon timeout. If no timeout keyword is used, an implicit null action is executed upon timeout. The default timeout period is 10 seconds but may be set, for example to 30, by the command "set timeout 30". An infinite timeout may be designated by the value -1. If a pattern is the keyword default, the corresponding body is executed upon either timeout or end- of-file. If a pattern matches, then the corresponding body is executed. expect returns the result of the body (or the empty string if no pattern matched). In the event that multiple patterns match, the one appearing first is used to select a body. Each time new output arrives, it is compared to each pattern in the order they are listed. Thus, you may test for absence of a match by making the last pattern something guaranteed to appear, such as a prompt. In situations where there is no prompt, you must use timeout (just like you would if you were interacting manually). Patterns are specified in three ways. By default, patterns are specified as with Tcl's string match command. (Such patterns are also similar to C-shell regular expressions usually referred to as "glob" patterns). The -gl flag may may be used to protect patterns that might otherwise match expect flags from doing so. Any pattern beginning with a "-" should be protected this way. (All strings starting with "-" are reserved for future options.) For example, the following fragment looks for a successful login. (Note that abort is presumed to be a procedure defined elsewhere in the script.) 
expect { busy {puts busy\n ; exp_continue} failed abort "invalid password" abort timeout abort connected } Quotes are necessary on the fourth pattern since it contains a space, which would otherwise separate the pattern from the action. Patterns with the same action (such as the 3rd and 4th) require listing the actions again. This can be avoid by using regexp-style patterns (see below). More information on forming glob-style patterns can be found in the Tcl manual. Regexp-style patterns follow the syntax defined by Tcl's regexp (short for "regular expression") command. regexp patterns are introduced with the flag -re. The previous example can be rewritten using a regexp as: expect { busy {puts busy\n ; exp_continue} -re "failed|invalid password" abort timeout abort connected } Both types of patterns are "unanchored". This means that patterns do not have to match the entire string, but can begin and end the match anywhere in the string (as long as everything else matches). Use ^ to match the beginning of a string, and $ to match the end. Note that if you do not wait for the end of a string, your responses can easily end up in the middle of the string as they are echoed from the spawned process. While still producing correct results, the output can look unnatural. Thus, use of $ is encouraged if you can exactly describe the characters at the end of a string. Note that in many editors, the ^ and $ match the beginning and end of lines respectively. However, because expect is not line oriented, these characters match the beginning and end of the data (as opposed to lines) currently in the expect matching buffer. (Also, see the note below on "system indigestion.") The -ex flag causes the pattern to be matched as an "exact" string. No interpretation of *, ^, etc is made (although the usual Tcl conventions must still be observed). Exact patterns are always unanchored. The -nocase flag causes uppercase characters of the output to compare as if they were lowercase characters. The pattern is not affected. While reading output, more than 2000 bytes can force earlier bytes to be "forgotten". This may be changed with the function match_max. (Note that excessively large values can slow down the pattern matcher.) If patlist is full_buffer, the corresponding body is executed if match_max bytes have been received and no other patterns have matched. Whether or not the full_buffer keyword is used, the forgotten characters are written to expect_out(buffer). If patlist is the keyword null, and nulls are allowed (via the remove_nulls command), the corresponding body is executed if a single ASCII 0 is matched. It is not possible to match 0 bytes via glob or regexp patterns. Upon matching a pattern (or eof or full_buffer), any matching and previously unmatched output is saved in the variable expect_out(buffer). Up to 9 regexp substring matches are saved in the variables expect_out(1,string) through expect_out(9,string). If the -indices flag is used before a pattern, the starting and ending indices (in a form suitable for lrange) of the 10 strings are stored in the variables expect_out(X,start) and expect_out(X,end) where X is a digit, corresponds to the substring position in the buffer. 0 refers to strings which matched the entire pattern and is generated for glob patterns as well as regexp patterns. 
For example, if a process has produced output of "abcdefgh\n", the result of: expect "cd" is as if the following statements had executed: set expect_out(0,string) cd set expect_out(buffer) abcd and "efgh\n" is left in the output buffer. If a process produced the output "abbbcabkkkka\n", the result of: expect -indices -re "b(b*).*(k+)" is as if the following statements had executed: set expect_out(0,start) 1 set expect_out(0,end) 10 set expect_out(0,string) bbbcabkkkk set expect_out(1,start) 2 set expect_out(1,end) 3 set expect_out(1,string) bb set expect_out(2,start) 10 set expect_out(2,end) 10 set expect_out(2,string) k set expect_out(buffer) abbbcabkkkk and "a\n" is left in the output buffer. The pattern "*" (and -re ".*") will flush the output buffer without reading any more output from the process. Normally, the matched output is discarded from Expect's internal buffers. This may be prevented by prefixing a pattern with the -notransfer flag. This flag is especially useful in experimenting (and can be abbreviated to "-not" for convenience while experimenting). The spawn id associated with the matching output (or eof or full_buffer) is stored in expect_out(spawn_id). The -timeout flag causes the current expect command to use the following value as a timeout instead of using the value of the timeout variable. By default, patterns are matched against output from the current process, however the -i flag declares the output from the named spawn_id list be matched against any following patterns (up to the next -i). The spawn_id list should either be a whitespace separated list of spawn_ids or a variable referring to such a list of spawn_ids. For example, the following example waits for "connected" from the current process, or "busy", "failed" or "invalid password" from the spawn_id named by $proc2. expect { -i $proc2 busy {puts busy\n ; exp_continue} -re "failed|invalid password" abort timeout abort connected } The value of the global variable any_spawn_id may be used to match patterns to any spawn_ids that are named with all other -i flags in the current expect command. The spawn_id from a -i flag with no associated pattern (i.e., followed immediately by another -i) is made available to any other patterns in the same expect command associated with any_spawn_id. The -i flag may also name a global variable in which case the variable is read for a list of spawn ids. The variable is reread whenever it changes. This provides a way of changing the I/O source while the command is in execution. Spawn ids provided this way are called "indirect" spawn ids. Actions such as break and continue cause control structures (i.e., for, proc) to behave in the usual way. The command exp_continue allows expect itself to continue executing rather than returning as it normally would. This is useful for avoiding explicit loops or repeated expect statements. The following example is part of a fragment to automate rlogin. The exp_continue avoids having to write a second expect statement (to look for the prompt again) if the rlogin prompts for a password. 
expect { Password: { stty -echo send_user "password (for $user) on $host: " expect_user -re "(.*)\n" send_user "\n" send "$expect_out(1,string)\r" stty echo exp_continue } incorrect { send_user "invalid password or account\n" exit } timeout { send_user "connection to $host timed out\n" exit } eof { send_user \ "connection to host failed: $expect_out(buffer)" exit } -re $prompt } For example, the following fragment might help a user guide an interaction that is already totally automated. In this case, the terminal is put into raw mode. If the user presses "+", a variable is incremented. If "p" is pressed, several returns are sent to the process, perhaps to poke it in some way, and "i" lets the user interact with the process, effectively stealing away control from the script. In each case, the exp_continue allows the current expect to continue pattern matching after executing the current action. stty raw -echo expect_after { -i $user_spawn_id "p" {send "\r\r\r"; exp_continue} "+" {incr foo; exp_continue} "i" {interact; exp_continue} "quit" exit } By default, exp_continue resets the timeout timer. The timer is not restarted, if exp_continue is called with the -continue_timer flag. expect_after [expect_args] works identically to the expect_before except that if patterns from both expect and expect_after can match, the expect pattern is used. See the expect_before command for more information. expect_background [expect_args] takes the same arguments as expect, however it returns immediately. Patterns are tested whenever new input arrives. The pattern timeout and default are meaningless to expect_background and are silently discarded. Otherwise, the expect_background command uses expect_before and expect_after patterns just like expect does. When expect_background actions are being evaluated, background processing for the same spawn id is blocked. Background processing is unblocked when the action completes. While background processing is blocked, it is possible to do a (foreground) expect on the same spawn id. It is not possible to execute an expect while an expect_background is unblocked. expect_background for a particular spawn id is deleted by declaring a new expect_background with the same spawn id. Declaring expect_background with no pattern removes the given spawn id from the ability to match patterns in the background. expect_before [expect_args] takes the same arguments as expect, however it returns immediately. Pattern-action pairs from the most recent expect_before with the same spawn id are implicitly added to any following expect commands. If a pattern matches, it is treated as if it had been specified in the expect command itself, and the associated body is executed in the context of the expect command. If patterns from both expect_before and expect can match, the expect_before pattern is used. If no pattern is specified, the spawn id is not checked for any patterns. Unless overridden by a -i flag, expect_before patterns match against the spawn id defined at the time that the expect_before command was executed (not when its pattern is matched). The -info flag causes expect_before to return the current specifications of what patterns it will match. By default, it reports on the current spawn id. An optional spawn id specification may be given for information on that spawn id. For example expect_before -info -i $proc At most one spawn id specification may be given. The flag -indirect suppresses direct spawn ids that come only from indirect specifications. 
Instead of a spawn id specification, the flag "-all" will cause "-info" to report on all spawn ids. The output of the -info flag can be reused as the argument to expect_before. expect_tty [expect_args] is like expect but it reads characters from /dev/tty (i.e. keystrokes from the user). By default, reading is performed in cooked mode. Thus, lines must end with a return in order for expect to see them. This may be changed via stty (see the stty command below). expect_user [expect_args] is like expect but it reads characters from stdin (i.e. keystrokes from the user). By default, reading is performed in cooked mode. Thus, lines must end with a return in order for expect to see them. This may be changed via stty (see the stty command below). fork creates a new process. The new process is an exact copy of the current Expect process. On success, fork returns 0 to the new (child) process and returns the process ID of the child process to the parent process. On failure (invariably due to lack of resources, e.g., swap space, memory), fork returns -1 to the parent process, and no child process is created. Forked processes exit via the exit command, just like the original process. Forked processes are allowed to write to the log files. If you do not disable debugging or logging in most of the processes, the result can be confusing. Some pty implementations may be confused by multiple readers and writers, even momentarily. Thus, it is safest to fork before spawning processes. interact [string1 body1] ... [stringn [bodyn]] gives control of the current process to the user, so that keystrokes are sent to the current process, and the stdout and stderr of the current process are returned. String-body pairs may be specified as arguments, in which case the body is executed when the corresponding string is entered. (By default, the string is not sent to the current process.) The interpreter command is assumed, if the final body is missing. If the arguments to the entire interact statement require more than one line, all the arguments may be "braced" into one so as to avoid terminating each line with a backslash. In this one case, the usual Tcl substitutions will occur despite the braces. For example, the following command runs interact with the following string-body pairs defined: When ^Z is pressed, Expect is suspended. (The -reset flag restores the terminal modes.) When ^A is pressed, the user sees "you typed a control-A" and the process is sent a ^A. When $ is pressed, the user sees the date. When ^C is pressed, Expect exits. If "foo" is entered, the user sees "bar". When ~~ is pressed, the Expect interpreter runs interactively. set CTRLZ \032 interact { -reset $CTRLZ {exec kill -STOP [pid]} \001 {send_user "you typed a control-A\n"; send "\001" } $ {send_user "The date is [clock format [clock seconds]]."} \003 exit foo {send_user "bar"} ~~ } In string-body pairs, strings are matched in the order they are listed as arguments. Strings that partially match are not sent to the current process in anticipation of the remainder coming. If characters are then entered such that there can no longer possibly be a match, only the part of the string will be sent to the process that cannot possibly begin another match. Thus, strings that are substrings of partial matches can match later, if the original strings that was attempting to be match ultimately fails. By default, string matching is exact with no wild cards. (In contrast, the expect command uses glob-style patterns by default.) 
The -ex flag may be used to protect patterns that might otherwise match interact flags from doing so. Any pattern beginning with a "-" should be protected this way. (All strings starting with "-" are reserved for future options.) The -re flag forces the string to be interpreted as a regexp-style pattern. In this case, matching substrings are stored in the variable interact_out similarly to the way expect stores its output in the variable expect_out. The -indices flag is similarly supported. The pattern eof introduces an action that is executed upon end-of-file. A separate eof pattern may also follow the -output flag in which case it is matched if an eof is detected while writing output. The default eof action is "return", so that interact simply returns upon any EOF. The pattern timeout introduces a timeout (in seconds) and action that is executed after no characters have been read for a given time. The timeout pattern applies to the most recently specified process. There is no default timeout. The special variable "timeout" (used by the expect command) has no affect on this timeout. For example, the following statement could be used to autologout users who have not typed anything for an hour but who still get frequent system messages: interact -input $user_spawn_id timeout 3600 return -output \ $spawn_id If the pattern is the keyword null, and nulls are allowed (via the remove_nulls command), the corresponding body is executed if a single ASCII 0 is matched. It is not possible to match 0 bytes via glob or regexp patterns. Prefacing a pattern with the flag -iwrite causes the variable interact_out(spawn_id) to be set to the spawn_id which matched the pattern (or eof). Actions such as break and continue cause control structures (i.e., for, proc) to behave in the usual way. However return causes interact to return to its caller, while inter_return causes interact to cause a return in its caller. For example, if "proc foo" called interact which then executed the action inter_return, proc foo would return. (This means that if interact calls interpreter interactively typing return will cause the interact to continue, while inter_return will cause the interact to return to its caller.) During interact, raw mode is used so that all characters may be passed to the current process. If the current process does not catch job control signals, it will stop if sent a stop signal (by default ^Z). To restart it, send a continue signal (such as by "kill -CONT <pid>"). If you really want to send a SIGSTOP to such a process (by ^Z), consider spawning csh first and then running your program. On the other hand, if you want to send a SIGSTOP to Expect itself, first call interpreter (perhaps by using an escape character), and then press ^Z. String-body pairs can be used as a shorthand for avoiding having to enter the interpreter and execute commands interactively. The previous terminal mode is used while the body of a string-body pair is being executed. For speed, actions execute in raw mode by default. The -reset flag resets the terminal to the mode it had before interact was executed (invariably, cooked mode). Note that characters entered when the mode is being switched may be lost (an unfortunate feature of the terminal driver on some systems). The only reason to use -reset is if your action depends on running in cooked mode. The -echo flag sends characters that match the following pattern back to the process that generated them as each character is read. 
This may be useful when the user needs to see feedback from partially typed patterns. If a pattern is being echoed but eventually fails to match, the characters are sent to the spawned process. If the spawned process then echoes them, the user will see the characters twice. -echo is probably only appropriate in situations where the user is unlikely to not complete the pattern. For example, the following excerpt is from rftp, the recursive-ftp script, where the user is prompted to enter ~g, ~p, or ~l, to get, put, or list the current directory recursively. These are so far away from the normal ftp commands, that the user is unlikely to type ~ followed by anything else, except mistakenly, in which case, they'll probably just ignore the result anyway. interact { -echo ~g {getcurdirectory 1} -echo ~l {getcurdirectory 0} -echo ~p {putcurdirectory} } The -nobuffer flag sends characters that match the following pattern on to the output process as characters are read. This is useful when you wish to let a program echo back the pattern. For example, the following might be used to monitor where a person is dialing (a Hayes-style modem). Each time "atd" is seen the script logs the rest of the line. proc lognumber {} { interact -nobuffer -re "(.*)\r" return puts $log "[clock format [clock seconds]]: dialed $interact_out(1,string)" } interact -nobuffer "atd" lognumber During interact, previous use of log_user is ignored. In particular, interact will force its output to be logged (sent to the standard output) since it is presumed the user doesn't wish to interact blindly. The -o flag causes any following key-body pairs to be applied to the output of the current process. This can be useful, for example, when dealing with hosts that send unwanted characters during a telnet session. By default, interact expects the user to be writing stdin and reading stdout of the Expect process itself. The -u flag (for "user") makes interact look for the user as the process named by its argument (which must be a spawned id). This allows two unrelated processes to be joined together without using an explicit loop. To aid in debugging, Expect diagnostics always go to stderr (or stdout for certain logging and debugging information). For the same reason, the interpreter command will read interactively from stdin. For example, the following fragment creates a login process. Then it dials the user (not shown), and finally connects the two together. Of course, any process may be substituted for login. A shell, for example, would allow the user to work without supplying an account and password. spawn login set login $spawn_id spawn tip modem # dial back out to user # connect user to login interact -u $login To send output to multiple processes, list each spawn id list prefaced by a -output flag. Input for a group of output spawn ids may be determined by a spawn id list prefaced by a -input flag. (Both -input and -output may take lists in the same form as the -i flag in the expect command, except that any_spawn_id is not meaningful in interact.) All following flags and strings (or patterns) apply to this input until another -input flag appears. If no -input appears, -output implies "-input $user_spawn_id -output". (Similarly, with patterns that do not have -input.) If one -input is specified, it overrides $user_spawn_id. If a second -input is specified, it overrides $spawn_id. Additional -input flags may be specified. 
The two implied input processes default to having their outputs specified as $spawn_id and $user_spawn_id (in reverse). If a -input flag appears with no -output flag, characters from that process are discarded. The -i flag introduces a replacement for the current spawn_id when no other -input or -output flags are used. A -i flag implies a -o flag. It is possible to change the processes that are being interacted with by using indirect spawn ids. (Indirect spawn ids are described in the section on the expect command.) Indirect spawn ids may be specified with the -i, -u, -input, or -output flags. interpreter [args] causes the user to be interactively prompted for Expect and Tcl commands. The result of each command is printed. Actions such as break and continue cause control structures (i.e., for, proc) to behave in the usual way. However return causes interpreter to return to its caller, while inter_return causes interpreter to cause a return in its caller. For example, if "proc foo" called interpreter which then executed the action inter_return, proc foo would return. Any other command causes interpreter to continue prompting for new commands. By default, the prompt contains two integers. The first integer describes the depth of the evaluation stack (i.e., how many times Tcl_Eval has been called). The second integer is the Tcl history identifier. The prompt can be set by defining a procedure called "prompt1" whose return value becomes the next prompt. If a statement has open quotes, parens, braces, or brackets, a secondary prompt (by default "+> ") is issued upon newline. The secondary prompt may be set by defining a procedure called "prompt2". During interpreter, cooked mode is used, even if the its caller was using raw mode. If stdin is closed, interpreter will return unless the -eof flag is used, in which case the subsequent argument is invoked. log_file [args] [[-a] file] If a filename is provided, log_file will record a transcript of the session (beginning at that point) in the file. log_file will stop recording if no argument is given. Any previous log file is closed. Instead of a filename, a Tcl file identifier may be provided by using the -open or -leaveopen flags. This is similar to the spawn command. (See spawn for more info.) The -a flag forces output to be logged that was suppressed by the log_user command. By default, the log_file command appends to old files rather than truncating them, for the convenience of being able to turn logging off and on multiple times in one session. To truncate files, use the -noappend flag. The -info flag causes log_file to return a description of the most recent non-info arguments given. log_user -info|0|1 By default, the send/expect dialogue is logged to stdout (and a logfile if open). The logging to stdout is disabled by the command "log_user 0" and reenabled by "log_user 1". Logging to the logfile is unchanged. The -info flag causes log_user to return a description of the most recent non-info arguments given. match_max [-d] [-i spawn_id] [size] defines the size of the buffer (in bytes) used internally by expect. With no size argument, the current size is returned. With the -d flag, the default size is set. (The initial default is 2000.) With the -i flag, the size is set for the named spawn id, otherwise it is set for the current process. overlay [-# spawn_id] [-# spawn_id] [...] program [args] executes program args in place of the current Expect program, which terminates. 
A bare hyphen argument forces a hyphen in front of the command name as if it was a login shell. All spawn_ids are closed except for those named as arguments. These are mapped onto the named file identifiers. Spawn_ids are mapped to file identifiers for the new program to inherit. For example, the following line runs chess and allows it to be controlled by the current process - say, a chess master. overlay -0 $spawn_id -1 $spawn_id -2 $spawn_id chess This is more efficient than "interact -u", however, it sacrifices the ability to do programmed interaction since the Expect process is no longer in control. Note that no controlling terminal is provided. Thus, if you disconnect or remap standard input, programs that do job control (shells, login, etc) will not function properly. parity [-d] [-i spawn_id] [value] defines whether parity should be retained or stripped from the output of spawned processes. If value is zero, parity is stripped, otherwise it is not stripped. With no value argument, the current value is returned. With the -d flag, the default parity value is set. (The initial default is 1, i.e., parity is not stripped.) With the -i flag, the parity value is set for the named spawn id, otherwise it is set for the current process. remove_nulls [-d] [-i spawn_id] [value] defines whether nulls are retained or removed from the output of spawned processes before pattern matching or storing in the variable expect_out or interact_out. If value is 1, nulls are removed. If value is 0, nulls are not removed. With no value argument, the current value is returned. With the -d flag, the default value is set. (The initial default is 1, i.e., nulls are removed.) With the -i flag, the value is set for the named spawn id, otherwise it is set for the current process. Whether or not nulls are removed, Expect will record null bytes to the log and stdout. send [-flags] string Sends string to the current process. For example, the command send "hello world\r" sends the characters, h e l l o <blank> w o r l d <return> to the current process. (Tcl includes a printf-like command (called format) which can build arbitrarily complex strings.) Characters are sent immediately although programs with line-buffered input will not read the characters until a return character is sent. A return character is denoted "\r". The -- flag forces the next argument to be interpreted as a string rather than a flag. Any string can be preceded by "--" whether or not it actually looks like a flag. This provides a reliable mechanism to specify variable strings without being tripped up by those that accidentally look like flags. (All strings starting with "-" are reserved for future options.) The -i flag declares that the string be sent to the named spawn_id. If the spawn_id is user_spawn_id, and the terminal is in raw mode, newlines in the string are translated to return-newline sequences so that they appear as if the terminal was in cooked mode. The -raw flag disables this translation. The -null flag sends null characters (0 bytes). By default, one null is sent. An integer may follow the -null to indicate how many nulls to send. The -break flag generates a break condition. This only makes sense if the spawn id refers to a tty device opened via "spawn -open". If you have spawned a process such as tip, you should use tip's convention for generating a break. 
The -s flag forces output to be sent "slowly", thus avoid the common situation where a computer outtypes an input buffer that was designed for a human who would never outtype the same buffer. This output is controlled by the value of the variable "send_slow" which takes a two element list. The first element is an integer that describes the number of bytes to send atomically. The second element is a real number that describes the number of seconds by which the atomic sends must be separated. For example, "set send_slow {10 .001}" would force "send -s" to send strings with 1 millisecond in between each 10 characters sent. The -h flag forces output to be sent (somewhat) like a human actually typing. Human-like delays appear between the characters. (The algorithm is based upon a Weibull distribution, with modifications to suit this particular application.) This output is controlled by the value of the variable "send_human" which takes a five element list. The first two elements are average interarrival time of characters in seconds. The first is used by default. The second is used at word endings, to simulate the subtle pauses that occasionally occur at such transitions. The third parameter is a measure of variability where .1 is quite variable, 1 is reasonably variable, and 10 is quite invariable. The extremes are 0 to infinity. The last two parameters are, respectively, a minimum and maximum interarrival time. The minimum and maximum are used last and "clip" the final time. The ultimate average can be quite different from the given average if the minimum and maximum clip enough values. As an example, the following command emulates a fast and consistent typist: set send_human {.1 .3 1 .05 2} send -h "I'm hungry. Let's do lunch." while the following might be more suitable after a hangover: set send_human {.4 .4 .2 .5 100} send -h "Goodd party lash night!" Note that errors are not simulated, although you can set up error correction situations yourself by embedding mistakes and corrections in a send argument. The flags for sending null characters, for sending breaks, for forcing slow output and for human-style output are mutually exclusive. Only the one specified last will be used. Furthermore, no string argument can be specified with the flags for sending null characters or breaks. It is a good idea to precede the first send to a process by an expect. expect will wait for the process to start, while send cannot. In particular, if the first send completes before the process starts running, you run the risk of having your data ignored. In situations where interactive programs offer no initial prompt, you can precede send by a delay as in: # To avoid giving hackers hints on how to break in, # this system does not prompt for an external password. # Wait for 5 seconds for exec to complete spawn telnet very.secure.gov sleep 5 send password\r exp_send is an alias for send. If you are using Expectk or some other variant of Expect in the Tk environment, send is defined by Tk for an entirely different purpose. exp_send is provided for compatibility between environments. Similar aliases are provided for other Expect's other send commands. send_error [-flags] string is like send, except that the output is sent to stderr rather than the current process. send_log [--] string is like send, except that the string is only sent to the log file (see log_file.) The arguments are ignored if no log file is open. 
send_tty [-flags] string is like send, except that the output is sent to /dev/tty rather than the current process. send_user [-flags] string is like send, except that the output is sent to stdout rather than the current process. sleep seconds causes the script to sleep for the given number of seconds. Seconds may be a decimal number. Interrupts (and Tk events if you are using Expectk) are processed while Expect sleeps. spawn [args] program [args] creates a new process running program args. Its stdin, stdout and stderr are connected to Expect, so that they may be read and written by other Expect commands. The connection is broken by close or if the process itself closes any of the file identifiers. When a process is started by spawn, the variable spawn_id is set to a descriptor referring to that process. The process described by spawn_id is considered the current process. spawn_id may be read or written, in effect providing job control. user_spawn_id is a global variable containing a descriptor which refers to the user. For example, when spawn_id is set to this value, expect behaves like expect_user. error_spawn_id is a global variable containing a descriptor which refers to the standard error. For example, when spawn_id is set to this value, send behaves like send_error. tty_spawn_id is a global variable containing a descriptor which refers to /dev/tty. If /dev/tty does not exist (such as in a cron, at, or batch script), then tty_spawn_id is not defined. This may be tested as: if {[info vars tty_spawn_id]} { # /dev/tty exists } else { # /dev/tty doesn't exist # probably in cron, batch, or at script } spawn returns the UNIX process id. If no process is spawned, 0 is returned. The variable spawn_out(slave,name) is set to the name of the pty slave device. By default, spawn echoes the command name and arguments. The -noecho flag stops spawn from doing this. The -console flag causes console output to be redirected to the spawned process. This is not supported on all systems. Internally, spawn uses a pty, initialized the same way as the user's tty. This is further initialized so that all settings are "sane" (according to stty(1)). If the variable stty_init is defined, it is interpreted in the style of stty arguments as further configuration. For example, "set stty_init raw" will cause further spawned processes's terminals to start in raw mode. -nottycopy skips the initialization based on the user's tty. -nottyinit skips the "sane" initialization. Normally, spawn takes little time to execute. If you notice spawn taking a significant amount of time, it is probably encountering ptys that are wedged. A number of tests are run on ptys to avoid entanglements with errant processes. (These take 10 seconds per wedged pty.) Running Expect with the -d option will show if Expect is encountering many ptys in odd states. If you cannot kill the processes to which these ptys are attached, your only recourse may be to reboot. If program cannot be spawned successfully because exec(2) fails (e.g. when program doesn't exist), an error message will be returned by the next interact or expect command as if program had run and produced the error message as output. This behavior is a natural consequence of the implementation of spawn. Internally, spawn forks, after which the spawned process has no way to communicate with the original Expect process except by communication via the spawn_id. The -open flag causes the next argument to be interpreted as a Tcl file identifier (i.e., returned by open.) 
The spawn id can then be used as if it were a spawned process. (The file identifier should no longer be used.) This lets you treat raw devices, files, and pipelines as spawned processes without using a pty. 0 is returned to indicate there is no associated process. When the connection to the spawned process is closed, so is the Tcl file identifier. The -leaveopen flag is similar to -open except that -leaveopen causes the file identifier to be left open even after the spawn id is closed. The -pty flag causes a pty to be opened but no process spawned. 0 is returned to indicate there is no associated process. Spawn_id is set as usual. The variable spawn_out(slave,fd) is set to a file identifier corresponding to the pty slave. It can be closed using "close -slave". The -ignore flag names a signal to be ignored in the spawned process. Otherwise, signals get the default behavior. Signals are named as in the trap command, except that each signal requires a separate flag. strace level causes following statements to be printed before being executed. (Tcl's trace command traces variables.) level indicates how far down in the call stack to trace. For example, the following command runs Expect while tracing the first 4 levels of calls, but none below that. expect -c "strace 4" script.exp The -info flag causes strace to return a description of the most recent non-info arguments given. stty args changes terminal modes similarly to the external stty command. By default, the controlling terminal is accessed. Other terminals can be accessed by appending "< /dev/tty..." to the command. (Note that the arguments should not be grouped into a single argument.) Requests for status return it as the result of the command. If no status is requested and the controlling terminal is accessed, the previous status of the raw and echo attributes are returned in a form which can later be used by the command. For example, the arguments raw or -cooked put the terminal into raw mode. The arguments -raw or cooked put the terminal into cooked mode. The arguments echo and -echo put the terminal into echo and noecho mode respectively. The following example illustrates how to temporarily disable echoing. This could be used in otherwise-automatic scripts to avoid embedding passwords in them. (See more discussion on this under EXPECT HINTS below.) stty -echo send_user "Password: " expect_user -re "(.*)\n" set password $expect_out(1,string) stty echo system args gives args to sh(1) as input, just as if it had been typed as a command from a terminal. Expect waits until the shell terminates. The return status from sh is handled the same way that exec handles its return status. In contrast to exec which redirects stdin and stdout to the script, system performs no redirection (other than that indicated by the string itself). Thus, it is possible to use programs which must talk directly to /dev/tty. For the same reason, the results of system are not recorded in the log. timestamp [args] returns a timestamp. With no arguments, the number of seconds since the epoch is returned. The -format flag introduces a string which is returned but with substitutions made according to the POSIX rules for strftime. For example %a is replaced by an abbreviated weekday name (i.e., Sat). 
Others are: %a abbreviated weekday name %A full weekday name %b abbreviated month name %B full month name %c date-time as in: Wed Oct 6 11:45:56 1993 %d day of the month (01-31) %H hour (00-23) %I hour (01-12) %j day (001-366) %m month (01-12) %M minute (00-59) %p am or pm %S second (00-61) %u day (1-7, Monday is first day of week) %U week (00-53, first Sunday is first day of week one) %V week (01-53, ISO 8601 style) %w day (0-6) %W week (00-53, first Monday is first day of week one) %x date-time as in: Wed Oct 6 1993 %X time as in: 23:59:59 %y year (00-99) %Y year as in: 1993 %Z timezone (or nothing if not determinable) %% a bare percent sign Other % specifications are undefined. Other characters will be passed through untouched. Only the C locale is supported. The -seconds flag introduces a number of seconds since the epoch to be used as a source from which to format. Otherwise, the current time is used. The -gmt flag forces timestamp output to use the GMT timezone. With no flag, the local timezone is used. trap [[command] signals] causes the given command to be executed upon future receipt of any of the given signals. The command is executed in the global scope. If command is absent, the signal action is returned. If command is the string SIG_IGN, the signals are ignored. If command is the string SIG_DFL, the signals are reset to the system default. signals is either a single signal or a list of signals. Signals may be specified numerically or symbolically as per signal(3). The "SIG" prefix may be omitted. With no arguments (or the argument -number), trap returns the signal number of the trap command currently being executed. The -code flag uses the return code of the command in place of whatever code Tcl was about to return when the command originally started running. The -interp flag causes the command to be evaluated using the interpreter active at the time the command started running rather than when the trap was declared. The -name flag causes the trap command to return the signal name of the trap command currently being executed. The -max flag causes the trap command to return the largest signal number that can be set. For example, the command "trap {send_user "Ouch!"} SIGINT" will print "Ouch!" each time the user presses ^C. By default, SIGINT (which can usually be generated by pressing ^C) and SIGTERM cause Expect to exit. This is due to the following trap, created by default when Expect starts. trap exit {SIGINT SIGTERM} If you use the -D flag to start the debugger, SIGINT is redefined to start the interactive debugger. This is due to the following trap: trap {exp_debug 1} SIGINT The debugger trap can be changed by setting the environment variable EXPECT_DEBUG_INIT to a new trap command. You can, of course, override both of these just by adding trap commands to your script. In particular, if you have your own "trap exit SIGINT", this will override the debugger trap. This is useful if you want to prevent users from getting to the debugger at all. If you want to define your own trap on SIGINT but still trap to the debugger when it is running, use: if {![exp_debug]} {trap mystuff SIGINT} Alternatively, you can trap to the debugger using some other signal. trap will not let you override the action for SIGALRM as this is used internally to Expect. The disconnect command sets SIGALRM to SIG_IGN (ignore). You can reenable this as long as you disable it during subsequent spawn commands. See signal(3) for more info.
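The format codes above can be combined freely. The following sketch (an illustration, not part of the manual) prints the current local time, the same instant in GMT, and a fixed epoch value, using the -format, -gmt and -seconds flags described for timestamp.

#!/bin/sh
# Minimal sketch: exercise timestamp's -format, -gmt and -seconds flags.
expect <<'EOF'
send_user "local:  [timestamp -format {%Y-%m-%d %H:%M:%S %Z}]\n"
send_user "gmt:    [timestamp -format {%Y-%m-%d %H:%M:%S} -gmt]\n"
send_user "epoch0: [timestamp -seconds 0 -format {%c} -gmt]\n"
EOF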
wait [args] delays until a spawned process (or the current process if none is named) terminates. wait normally returns a list of four integers. The first integer is the pid of the process that was waited upon. The second integer is the corresponding spawn id. The third integer is -1 if an operating system error occurred, or 0 otherwise. If the third integer was 0, the fourth integer is the status returned by the spawned process. If the third integer was -1, the fourth integer is the value of errno set by the operating system. The global variable errorCode is also set. Additional elements may appear at the end of the return value from wait. An optional fifth element identifies a class of information. Currently, the only possible value for this element is CHILDKILLED in which case the next two values are the C-style signal name and a short textual description. The -i flag declares the process to wait corresponding to the named spawn_id (NOT the process id). Inside a SIGCHLD handler, it is possible to wait for any spawned process by using the spawn id -1. The -nowait flag causes the wait to return immediately with the indication of a successful wait. When the process exits (later), it will automatically disappear without the need for an explicit wait. The wait command may also be used to wait for a forked process using the arguments "-i -1". Unlike its use with spawned processes, this command can be executed at any time. There is no control over which process is reaped. However, the return value can be checked for the process id. LIBRARIES top Expect automatically knows about two built-in libraries for Expect scripts. These are defined by the directories named in the variables exp_library and exp_exec_library. Both are meant to contain utility files that can be used by other scripts. exp_library contains architecture-independent files. exp_exec_library contains architecture-dependent files. Depending on your system, both directories may be totally empty. The existence of the file $exp_exec_library/cat-buffers describes whether your /bin/cat buffers by default. PRETTY-PRINTING top A vgrind definition is available for pretty-printing Expect scripts. Assuming the vgrind definition supplied with the Expect distribution is correctly installed, you can use it as: vgrind -lexpect file EXAMPLES top It may not be apparent how to put everything together that the man page describes. I encourage you to read and try out the examples in the example directory of the Expect distribution. Some of them are real programs. Others are simply illustrative of certain techniques, and of course, a couple are just quick hacks. The INSTALL file has a quick overview of these programs. The Expect papers (see SEE ALSO) are also useful. While some papers use syntax corresponding to earlier versions of Expect, the accompanying rationales are still valid and go into a lot more detail than this man page. CAVEATS top Extensions may collide with Expect's command names. For example, send is defined by Tk for an entirely different purpose. For this reason, most of the Expect commands are also available as "exp_XXXX". Commands and variables beginning with "exp", "inter", "spawn", and "timeout" do not have aliases. Use the extended command names if you need this compatibility between environments. Expect takes a rather liberal view of scoping. In particular, variables read by commands specific to the Expect program will be sought first from the local scope, and if not found, in the global scope.
For example, this obviates the need to place "global timeout" in every procedure you write that uses expect. On the other hand, variables written are always in the local scope (unless a "global" command has been issued). The most common problem this causes is when spawn is executed in a procedure. Outside the procedure, spawn_id no longer exists, so the spawned process is no longer accessible simply because of scoping. Add a "global spawn_id" to such a procedure. If you cannot enable the multispawning capability (i.e., your system supports neither select (BSD *.*), poll (SVR>2), nor something equivalent), Expect will only be able to control a single process at a time. In this case, do not attempt to set spawn_id, nor should you execute processes via exec while a spawned process is running. Furthermore, you will not be able to expect from multiple processes (including the user as one) at the same time. Terminal parameters can have a big effect on scripts. For example, if a script is written to look for echoing, it will misbehave if echoing is turned off. For this reason, Expect forces sane terminal parameters by default. Unfortunately, this can make things unpleasant for other programs. As an example, the emacs shell wants to change the "usual" mappings: newlines get mapped to newlines instead of carriage-return newlines, and echoing is disabled. This allows one to use emacs to edit the input line. Unfortunately, Expect cannot possibly guess this. You can request that Expect not override its default setting of terminal parameters, but you must then be very careful when writing scripts for such environments. In the case of emacs, avoid depending upon things like echoing and end-of-line mappings. The commands that accept arguments braced into a single list (the expect variants and interact) use a heuristic to decide if the list is actually one argument or many. The heuristic can fail only in the case when the list actually does represent a single argument which has multiple embedded \n's with non-whitespace characters between them. This seems sufficiently improbable; however, the argument "-nobrace" can be used to force a single argument to be handled as a single argument. This could conceivably be used with machine-generated Expect code. Similarly, -brace forces a single argument to be handled as multiple patterns/actions. BUGS top It was really tempting to name the program "sex" (for either "Smart EXec" or "Send-EXpect"), but good sense (or perhaps just Puritanism) prevailed. On some systems, when a shell is spawned, it complains about not being able to access the tty but runs anyway. This means your system has a mechanism for gaining the controlling tty that Expect doesn't know about. Please find out what it is, and send this information back to me. Ultrix 4.1 (at least the latest versions around here) considers timeouts of above 1000000 to be equivalent to 0. Digital UNIX 4.0A (and probably other versions) refuses to allocate ptys if you define a SIGCHLD handler. See grantpt page for more info. IRIX 6.0 does not handle pty permissions correctly so that if Expect attempts to allocate a pty previously used by someone else, it fails. Upgrade to IRIX 6.1. Telnet (verified only under SunOS 4.1.2) hangs if TERM is not set. This is a problem under cron, at and in cgi scripts, which do not define TERM. Thus, you must set it explicitly - to what type is usually irrelevant. It just has to be set to something! The following probably suffices for most cases.
set env(TERM) vt100 Tip (verified only under BSDI BSD/OS 3.1 i386) hangs if SHELL and HOME are not set. This is a problem under cron, at and in cgi scripts, which do not define these environment variables. Thus, you must set them explicitly - to what value is usually irrelevant. They just have to be set to something! The following probably suffices for most cases. set env(SHELL) /bin/sh set env(HOME) /usr/local/bin Some implementations of ptys are designed so that the kernel throws away any unread output after 10 to 15 seconds (actual number is implementation-dependent) after the process has closed the file descriptor. Thus Expect programs such as spawn date sleep 20 expect will fail. To avoid this, invoke non-interactive programs with exec rather than spawn. While such situations are conceivable, in practice I have never encountered a situation in which the final output of a truly interactive program would be lost due to this behavior. On the other hand, Cray UNICOS ptys throw away any unread output immediately after the process has closed the file descriptor. I have reported this to Cray and they are working on a fix. Sometimes a delay is required between a prompt and a response, such as when a tty interface is changing UART settings or matching baud rates by looking for start/stop bits. Usually, all that is required is to sleep for a second or two. A more robust technique is to retry until the hardware is ready to receive input. The following example uses both strategies: send "speed 9600\r"; sleep 1 expect { timeout {send "\r"; exp_continue} $prompt } trap -code will not work with any command that sits in Tcl's event loop, such as sleep. The problem is that in the event loop, Tcl discards the return codes from async event handlers. A workaround is to set a flag in the trap code. Then check the flag immediately after the command (i.e., sleep). The expect_background command ignores -timeout arguments and has no concept of timeouts in general. EXPECT HINTS top There are a couple of things about Expect that may be non-intuitive. This section attempts to address some of these things with a couple of suggestions. A common expect problem is how to recognize shell prompts. Since these are customized differently by different people and different shells, portably automating rlogin can be difficult without knowing the prompt. A reasonable convention is to have users store a regular expression describing their prompt (in particular, the end of it) in the environment variable EXPECT_PROMPT. Code like the following can be used. If EXPECT_PROMPT doesn't exist, the code still has a good chance of functioning correctly. set prompt "(%|#|\\$) $" ;# default prompt catch {set prompt $env(EXPECT_PROMPT)} expect -re $prompt I encourage you to write expect patterns that include the end of whatever you expect to see. This avoids the possibility of answering a question before seeing the entire thing. In addition, while you may well be able to answer questions before seeing them entirely, if you answer early, your answer may appear echoed back in the middle of the question. In other words, the resulting dialogue will be correct but look scrambled. Most prompts include a space character at the end. For example, the prompt from ftp is 'f', 't', 'p', '>' and <blank>. To match this prompt, you must account for each of these characters. It is a common mistake not to include the blank. Put the blank in explicitly.
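Putting the EXPECT_PROMPT convention and the end-anchored prompt pattern above to work, the following sketch (not from the manual) runs a single command in a spawned shell and then prints everything that arrived between the two prompts. It assumes $SHELL names a Bourne- or csh-style shell whose prompt matches the default pattern or EXPECT_PROMPT.

#!/bin/sh
# Minimal sketch: drive one command through a spawned shell using the
# prompt-matching convention described above.
expect <<'EOF'
set prompt "(%|#|\\$) $"          ;# default prompt, note the trailing blank
catch {set prompt $env(EXPECT_PROMPT)}
set timeout 10
spawn $env(SHELL)
expect -re $prompt
send "uname -r\r"
expect -re $prompt                ;# wait for the end: the next prompt
send_user "between prompts:\n$expect_out(buffer)\n"
send "exit\r"
expect eof
EOF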
If you use a pattern of the form X*, the * will match all the output received from the end of X to the last thing received. This sounds intuitive but can be somewhat confusing because the phrase "last thing received" can vary depending upon the speed of the computer and the processing of I/O both by the kernel and the device driver. In particular, humans tend to see program output arriving in huge chunks (atomically) when in reality most programs produce output one line at a time. Assuming this is the case, the * in the pattern of the previous paragraph may only match the end of the current line even though there seems to be more, because at the time of the match that was all the output that had been received. expect has no way of knowing that further output is coming unless your pattern specifically accounts for it. Even depending on line-oriented buffering is unwise. Not only do programs rarely make promises about the type of buffering they do, but system indigestion can break output lines up so that lines break at seemingly random places. Thus, if you can express the last few characters of a prompt when writing patterns, it is wise to do so. If you are waiting for a pattern in the last output of a program and the program emits something else instead, you will not be able to detect that with the timeout keyword. The reason is that expect will not timeout - instead it will get an eof indication. Use that instead. Even better, use both. That way if that line is ever moved around, you won't have to edit the line itself. Newlines are usually converted to carriage return, linefeed sequences when output by the terminal driver. Thus, if you want a pattern that explicitly matches the two lines, from, say, printf("foo\nbar"), you should use the pattern "foo\r\nbar". A similar translation occurs when reading from the user, via expect_user. In this case, when you press return, it will be translated to a newline. If Expect then passes that to a program which sets its terminal to raw mode (like telnet), there is going to be a problem, as the program expects a true return. (Some programs are actually forgiving in that they will automatically translate newlines to returns, but most don't.) Unfortunately, there is no way to find out that a program put its terminal into raw mode. Rather than manually replacing newlines with returns, the solution is to use the command "stty raw", which will stop the translation. Note, however, that this means that you will no longer get the cooked line-editing features. interact implicitly sets your terminal to raw mode so this problem will not arise then. It is often useful to store passwords (or other private information) in Expect scripts. This is not recommended since anything that is stored on a computer is susceptible to being accessed by anyone. Thus, interactively prompting for passwords from a script is a smarter idea than embedding them literally. Nonetheless, sometimes such embedding is the only possibility. Unfortunately, the UNIX file system has no direct way of creating scripts which are executable but unreadable. Systems which support setgid shell scripts may indirectly simulate this as follows: Create the Expect script (that contains the secret data) as usual. Make its permissions be 750 (-rwxr-x---) and owned by a trusted group, i.e., a group which is allowed to read it. If necessary, create a new group for this purpose. Next, create a /bin/sh script with permissions 2751 (-rwxr-s--x) owned by the same group as before. 
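As a concrete sketch of the wrapper just described (an illustration only, not from the manual; the paths and the group name are invented, and the technique only works on systems that honor the setgid bit on shell scripts), the /bin/sh script could look like this:

#!/bin/sh
# Hypothetical wrapper, installed e.g. as /usr/local/bin/run-secret with
# mode 2751 and group "expectrun". The Expect script holding the secrets
# lives at /usr/local/lib/secret.exp with mode 750 and the same group, so
# callers can run it through this wrapper but cannot read it directly.
exec /usr/bin/expect -f /usr/local/lib/secret.exp "$@"

# One-time setup, run as root (names are examples only):
#   groupadd expectrun
#   chgrp expectrun /usr/local/lib/secret.exp /usr/local/bin/run-secret
#   chmod 750  /usr/local/lib/secret.exp
#   chmod 2751 /usr/local/bin/run-secret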
The result is a script which may be executed (and read) by anyone. When invoked, it runs the Expect script. SEE ALSO top Tcl(3), libexpect(3) "Exploring Expect: A Tcl-Based Toolkit for Automating Interactive Programs" by Don Libes, pp. 602, ISBN 1-56592-090-2, O'Reilly and Associates, 1995. "expect: Curing Those Uncontrollable Fits of Interactivity" by Don Libes, Proceedings of the Summer 1990 USENIX Conference, Anaheim, California, June 11-15, 1990. "Using expect to Automate System Administration Tasks" by Don Libes, Proceedings of the 1990 USENIX Large Installation Systems Administration Conference, Colorado Springs, Colorado, October 17-19, 1990. "Tcl: An Embeddable Command Language" by John Ousterhout, Proceedings of the Winter 1990 USENIX Conference, Washington, D.C., January 22-26, 1990. "expect: Scripts for Controlling Interactive Programs" by Don Libes, Computing Systems, Vol. 4, No. 2, University of California Press Journals, November 1991. "Regression Testing and Conformance Testing Interactive Programs", by Don Libes, Proceedings of the Summer 1992 USENIX Conference, pp. 135-144, San Antonio, TX, June 12-15, 1992. "Kibitz - Connecting Multiple Interactive Programs Together", by Don Libes, Software - Practice & Experience, John Wiley & Sons, West Sussex, England, Vol. 23, No. 5, May, 1993. "A Debugger for Tcl Applications", by Don Libes, Proceedings of the 1993 Tcl/Tk Workshop, Berkeley, CA, June 10-11, 1993. AUTHOR top Don Libes, National Institute of Standards and Technology ACKNOWLEDGMENTS top Thanks to John Ousterhout for Tcl, and Scott Paisley for inspiration. Thanks to Rob Savoye for Expect's autoconfiguration code. The HISTORY file documents much of the evolution of expect. It makes interesting reading and might give you further insight to this software. Thanks to the people mentioned in it who sent me bug fixes and gave other assistance. Design and implementation of Expect was paid for in part by the U.S. government and is therefore in the public domain. However the author and NIST would like credit if this program and documentation or portions of them are used. COLOPHON top This page is part of the expect (programmed dialogue with interactive programs) project. Information about the project can be found at https://core.tcl.tk/expect/index. If you have a bug report for this manual page, see https://sourceforge.net/p/expect/bugs/. This page was obtained from the tarball expect5.45.3.tar.gz fetched from http://sourceforge.net/projects/expect/files/Expect/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up- to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 29 December 1994 EXPECT(1) Pages that refer to this page: libexpect(3), pty(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# expect\n\n> Script executor that interacts with other programs that require user input.\n> More information: <https://manned.org/expect>.\n\n- Execute an expect script from a file:\n\n`expect {{path/to/file}}`\n\n- Execute a specified expect script:\n\n`expect -c "{{commands}}"`\n\n- Enter an interactive REPL (use `exit` or Ctrl + D to exit):\n\n`expect -i`\n
export
export(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training export(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR S | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT EXPORT(1P) POSIX Programmer's Manual EXPORT(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top export set the export attribute for variables SYNOPSIS top export name[=word]... export -p DESCRIPTION top The shell shall give the export attribute to the variables corresponding to the specified names, which shall cause them to be in the environment of subsequently executed commands. If the name of a variable is followed by =word, then the value of that variable shall be set to word. The export special built-in shall support the Base Definitions volume of POSIX.12017, Section 12.2, Utility Syntax Guidelines. When -p is specified, export shall write to the standard output the names and values of all exported variables, in the following format: "export %s=%s\n", <name>, <value> if name is set, and: "export %s\n", <name> if name is unset. The shell shall format the output, including the proper use of quoting, so that it is suitable for reinput to the shell as commands that achieve the same exporting results, except: 1. Read-only variables with values cannot be reset. 2. Variables that were unset at the time they were output need not be reset to the unset state if a value is assigned to the variable between the time the state was saved and the time at which the saved output is reinput to the shell. When no arguments are given, the results are unspecified. OPTIONS top See the DESCRIPTION. OPERANDS top See the DESCRIPTION. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top None. ASYNCHRONOUS EVENTS top Default. STDOUT top See the DESCRIPTION. STDERR S top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top 0 All name operands were successfully exported. >0 At least one name could not be exported, or the -p option was specified and an error occurred. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top Note that, unless X was previously marked readonly, the value of "$?" after: export X=$(false) will be 0 (because export successfully set X to the empty string) and that execution continues, even if set -e is in effect. In order to detect command substitution failures, a user must separate the assignment from the export, as in: X=$(false) export X EXAMPLES top Export PWD and HOME variables: export PWD HOME Set and export the PATH variable: export PATH=/local/bin:$PATH Save and restore all exported variables: export -p > temp-file unset a lot of variables ... processing . temp-file RATIONALE top Some historical shells use the no-argument case as the functional equivalent of what is required here with -p. This feature was left unspecified because it is not historical practice in all shells, and some scripts may rely on the now-unspecified results on their implementations. 
Attempts to specify the -p output as the default case were unsuccessful in achieving consensus. The -p option was added to allow portable access to the values that can be saved and then later restored using; for example, a dot script. FUTURE DIRECTIONS top None. SEE ALSO top Section 2.14, Special Built-In Utilities The Base Definitions volume of POSIX.12017, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 EXPORT(1P) Pages that refer to this page: readonly(1p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
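To make the APPLICATION USAGE note above concrete, here is a small sketch (not part of the standard text) showing why the assignment and the export must be separated when a command substitution's failure matters under set -e:

#!/bin/sh
# Sketch of the pitfall described under APPLICATION USAGE.
set -e

export X=$(false)      # export itself succeeds, so set -e does not trigger
echo "combined form survived; X='$X'"

Y=$(false)             # the bare assignment reports the substitution's failure,
export Y               # so set -e stops the script before this line
echo "never printed"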
# export\n\n> Export shell variables to child processes.\n> More information: <https://www.gnu.org/software/bash/manual/bash.html#index-export>.\n\n- Set an environment variable:\n\n`export {{VARIABLE}}={{value}}`\n\n- Remove the export attribute from a variable (it stays set but is no longer passed to child processes):\n\n`export -n {{VARIABLE}}`\n\n- Export a function to child processes:\n\n`export -f {{FUNCTION_NAME}}`\n\n- Append a pathname to the environment variable `PATH`:\n\n`export PATH=$PATH:{{path/to/append}}`\n
expr
expr(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training expr(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON EXPR(1) User Commands EXPR(1) NAME top expr - evaluate expressions SYNOPSIS top expr EXPRESSION expr OPTION DESCRIPTION top --help display this help and exit --version output version information and exit Print the value of EXPRESSION to standard output. A blank line below separates increasing precedence groups. EXPRESSION may be: ARG1 | ARG2 ARG1 if it is neither null nor 0, otherwise ARG2 ARG1 & ARG2 ARG1 if neither argument is null or 0, otherwise 0 ARG1 < ARG2 ARG1 is less than ARG2 ARG1 <= ARG2 ARG1 is less than or equal to ARG2 ARG1 = ARG2 ARG1 is equal to ARG2 ARG1 != ARG2 ARG1 is unequal to ARG2 ARG1 >= ARG2 ARG1 is greater than or equal to ARG2 ARG1 > ARG2 ARG1 is greater than ARG2 ARG1 + ARG2 arithmetic sum of ARG1 and ARG2 ARG1 - ARG2 arithmetic difference of ARG1 and ARG2 ARG1 * ARG2 arithmetic product of ARG1 and ARG2 ARG1 / ARG2 arithmetic quotient of ARG1 divided by ARG2 ARG1 % ARG2 arithmetic remainder of ARG1 divided by ARG2 STRING : REGEXP anchored pattern match of REGEXP in STRING match STRING REGEXP same as STRING : REGEXP substr STRING POS LENGTH substring of STRING, POS counted from 1 index STRING CHARS index in STRING where any CHARS is found, or 0 length STRING length of STRING + TOKEN interpret TOKEN as a string, even if it is a keyword like 'match' or an operator like '/' ( EXPRESSION ) value of EXPRESSION Beware that many operators need to be escaped or quoted for shells. Comparisons are arithmetic if both ARGs are numbers, else lexicographical. Pattern matches return the string matched between \( and \) or null; if \( and \) are not used, they return the number of characters matched or 0. Exit status is 0 if EXPRESSION is neither null nor 0, 1 if EXPRESSION is null or 0, 2 if EXPRESSION is syntactically invalid, and 3 if an error occurred. AUTHOR top Written by Mike Parker, James Youngman, and Paul Eggert. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/expr> or available locally via: info '(coreutils) expr invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 EXPR(1) Pages that refer to this page: sysconf(3) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. 
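Because several expr operators are also shell metacharacters, they have to be escaped or quoted on the command line, as the DESCRIPTION warns. A short sketch (illustrative only):

#!/bin/sh
# expr operators such as *, (, ), | and & must be protected from the shell.
expr 3 \* \( 2 + 4 \)          # prints 18
expr length "hello world"      # prints 11
expr abcdef : '\(abc\)'        # prints the part captured by \( \): abc

# The exit status separates "result is null or 0" (1) from real errors (2, 3):
expr 1 - 1                     # prints 0 ...
echo "exit status: $?"         # ... and reports exit status 1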
# expr\n\n> Evaluate expressions and manipulate strings.\n> More information: <https://www.gnu.org/software/coreutils/expr>.\n\n- Get the length of a specific string:\n\n`expr length "{{string}}"`\n\n- Get the substring of a string with a specific length:\n\n`expr substr "{{string}}" {{from}} {{length}}`\n\n- Match a specific substring against an anchored pattern:\n\n`expr match "{{string}}" '{{pattern}}'`\n\n- Get the position of the first character from a specific set found in a string:\n\n`expr index "{{string}}" "{{chars}}"`\n\n- Calculate a specific mathematical expression:\n\n`expr {{expression1}} {{+|-|*|/|%}} {{expression2}}`\n\n- Get the first expression if its value is non-zero and not null, otherwise get the second one:\n\n`expr {{expression1}} \| {{expression2}}`\n\n- Get the first expression if both expressions are non-zero and not null, otherwise get zero:\n\n`expr {{expression1}} \& {{expression2}}`\n
factor
factor(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training factor(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON FACTOR(1) User Commands FACTOR(1) NAME top factor - factor numbers SYNOPSIS top factor [OPTION] [NUMBER]... DESCRIPTION top Print the prime factors of each specified integer NUMBER. If none are specified on the command line, read them from standard input. -h, --exponents print repeated factors in form p^e unless e is 1 --help display this help and exit --version output version information and exit AUTHOR top Written by Paul Rubin, Torbjorn Granlund, and Niels Moller. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/factor> or available locally via: info '(coreutils) factor invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 FACTOR(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
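A brief sketch of the two input styles and the -h/--exponents form described above (illustrative only):

#!/bin/sh
# Factor operands given on the command line, then the same number with
# repeated primes collapsed into p^e form, then numbers read from stdin.
factor 720 97
factor --exponents 720
seq 98 100 | factor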
# factor\n\n> Prints the prime factorization of a number.\n> More information: <https://www.gnu.org/software/coreutils/factor>.\n\n- Display the prime-factorization of a number:\n\n`factor {{number}}`\n\n- Take the input from `stdin` if no argument is specified:\n\n`echo {{number}} | factor`\n
faillock
faillock(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training faillock(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | SEE ALSO | AUTHOR | COLOPHON FAILLOCK(8) Linux-PAM Manual FAILLOCK(8) NAME top faillock - Tool for displaying and modifying the authentication failure record files SYNOPSIS top faillock [--dir /path/to/tally-directory] [--user username] [--reset] DESCRIPTION top The pam_faillock.so module maintains a list of failed authentication attempts per user during a specified interval and locks the account in case there were more than deny consecutive failed authentications. It stores the failure records into per-user files in the tally directory. The faillock command is an application which can be used to examine and modify the contents of the tally files. It can display the recent failed authentication attempts of the username or clear the tally files of all or individual usernames. OPTIONS top --conf /path/to/config-file The file where the configuration is located. The default is /etc/security/faillock.conf. --dir /path/to/tally-directory The directory where the user files with the failure records are kept. The priority to set this option is to use the value provided from the command line. If this isn't provided, then the value from the configuration file is used. Finally, if neither of them has been provided, then /var/run/faillock is used. --user username The user whose failure records should be displayed or cleared. --reset Instead of displaying the user's failure records, clear them. FILES top /var/run/faillock/* the files logging the authentication failures for users SEE ALSO top pam_faillock(8), pam(8) AUTHOR top faillock was written by Tomas Mraz. COLOPHON top This page is part of the linux-pam (Pluggable Authentication Modules for Linux) project. Information about the project can be found at http://www.linux-pam.org/. If you have a bug report for this manual page, see //www.linux-pam.org/. This page was obtained from the project's upstream Git repository https://github.com/linux-pam/linux-pam.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-18.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Linux-PAM Manual 12/22/2023 FAILLOCK(8) Pages that refer to this page: faillock.conf(5), pam_faillock(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
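A short sketch of the options above (illustrative only; the username is an example, and reading the records normally requires root):

#!/bin/sh
# Inspect, then clear, the failure records of one user, naming the tally
# directory (here the default, shown explicitly) and the configuration file.
sudo faillock --user alice
sudo faillock --dir /var/run/faillock --user alice --reset
sudo faillock --conf /etc/security/faillock.conf --user alice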
# faillock\n\n> Display and modify authentication failure record files.\n> More information: <https://manned.org/faillock>.\n\n- List login failures of all users:\n\n`sudo faillock`\n\n- List login failures of the specified user:\n\n`sudo faillock --user {{user}}`\n\n- Reset the failure records of the specified user:\n\n`sudo faillock --user {{user}} --reset`\n
fallocate
fallocate(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fallocate(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY FALLOCATE(1) User Commands FALLOCATE(1) NAME top fallocate - preallocate or deallocate space to a file SYNOPSIS top fallocate [-c|-p|-z] [-o offset] -l length [-n] filename fallocate -d [-o offset] [-l length] filename fallocate -x [-o offset] -l length filename DESCRIPTION top fallocate is used to manipulate the allocated disk space for a file, either to deallocate or preallocate it. For filesystems which support the fallocate(2) system call, preallocation is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeroes. The exit status returned by fallocate is 0 on success and 1 on failure. OPTIONS top The length and offset arguments may be followed by the multiplicative suffixes KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB, and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB") or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, TB, PB, EB, ZB, and YB. The options --collapse-range, --dig-holes, --punch-hole, and --zero-range are mutually exclusive. -c, --collapse-range Removes a byte range from a file, without leaving a hole. The byte range to be collapsed starts at offset and continues for length bytes. At the completion of the operation, the contents of the file starting at the location offset+length will be appended at the location offset, and the file will be length bytes smaller. The option --keep-size may not be specified for the collapse-range operation. Available since Linux 3.15 for ext4 (only for extent-based files) and XFS. A filesystem may place limitations on the granularity of the operation, in order to ensure efficient implementation. Typically, offset and length must be a multiple of the filesystem logical block size, which varies according to the filesystem type and configuration. If a filesystem has such a requirement, the operation will fail with the error EINVAL if this requirement is violated. -d, --dig-holes Detect and dig holes. This makes the file sparse in-place, without using extra disk space. The minimum size of the hole depends on filesystem I/O block size (usually 4096 bytes). Also, when using this option, --keep-size is implied. If no range is specified by --offset and --length, then the entire file is analyzed for holes. You can think of this option as doing a "cp --sparse" and then renaming the destination file to the original, without the need for extra disk space. See --punch-hole for a list of supported filesystems. -i, --insert-range Insert a hole of length bytes from offset, shifting existing data. -l, --length length Specifies the length of the range, in bytes. -n, --keep-size Do not modify the apparent length of the file. This may effectively allocate blocks past EOF, which can be removed with a truncate. -o, --offset offset Specifies the beginning offset of the range, in bytes. -p, --punch-hole Deallocates space (i.e., creates a hole) in the byte range starting at offset and continuing for length bytes. Within the specified range, partial filesystem blocks are zeroed, and whole filesystem blocks are removed from the file. After a successful call, subsequent reads from this range will return zeroes. 
This option may not be specified at the same time as the --zero-range option. Also, when using this option, --keep-size is implied. Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0), Btrfs (since Linux 3.7), tmpfs (since Linux 3.5) and gfs2 (since Linux 4.16). -v, --verbose Enable verbose mode. -x, --posix Enable POSIX operation mode. In that mode allocation operation always completes, but it may take longer time when fast allocation is not supported by the underlying filesystem. -z, --zero-range Zeroes space in the byte range starting at offset and continuing for length bytes. Within the specified range, blocks are preallocated for the regions that span the holes in the file. After a successful call, subsequent reads from this range will return zeroes. Zeroing is done within the filesystem preferably by converting the range into unwritten extents. This approach means that the specified range will not be physically zeroed out on the device (except for partial blocks at the either end of the range), and I/O is (otherwise) required only to update metadata. Option --keep-size can be specified to prevent file length modification. Available since Linux 3.14 for ext4 (only for extent-based files) and XFS. -h, --help Display help text and exit. -V, --version Print version and exit. AUTHORS top Eric Sandeen <sandeen@redhat.com>, Karel Zak <kzak@redhat.com> SEE ALSO top truncate(1), fallocate(2), posix_fallocate(3) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The fallocate command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 FALLOCATE(1) Pages that refer to this page: fallocate(2), posix_fallocate(3), swapon(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
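The difference between a file's apparent size and its on-disk allocation is easiest to see with a quick experiment such as the following sketch (illustrative only; it assumes a filesystem that supports fallocate(2) and hole punching, e.g. ext4 or XFS):

#!/bin/sh
# Preallocate 100 MiB, punch a 10 MiB hole, and watch disk usage change
# while the apparent size stays the same (--punch-hole implies --keep-size).
fallocate --length 100MiB demo.img
ls -lh demo.img                 # apparent size: 100M
du -h  demo.img                 # allocated:     about 100M

fallocate --punch-hole --offset 0 --length 10MiB demo.img
ls -lh demo.img                 # apparent size: still 100M
du -h  demo.img                 # allocated:     about 90M

fallocate --dig-holes demo.img  # re-sparsify any all-zero blocks in place
rm demo.img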
# fallocate\n\n> Reserve or deallocate disk space for files.\n> The utility allocates space without zeroing.\n> More information: <https://manned.org/fallocate>.\n\n- Reserve a file taking up 700 MiB of disk space:\n\n`fallocate --length {{700M}} {{path/to/file}}`\n\n- Shrink an already allocated file by 200 MiB:\n\n`fallocate --collapse-range --length {{200M}} {{path/to/file}}`\n\n- Remove 20 MiB of space starting at an offset of 100 MiB in a file:\n\n`fallocate --collapse-range --offset {{100M}} --length {{20M}} {{path/to/file}}`\n
false
false(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training false(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON FALSE(1) User Commands FALSE(1) NAME top false - do nothing, unsuccessfully SYNOPSIS top false [ignored command line arguments] false OPTION DESCRIPTION top Exit with a status code indicating failure. --help display this help and exit --version output version information and exit NOTE: your shell may have its own version of false, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports. AUTHOR top Written by Jim Meyering. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/false> or available locally via: info '(coreutils) false invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 FALSE(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
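Since false does nothing but return a failing status, its main use is in control flow and tests, e.g. (illustrative only):

#!/bin/sh
# false always fails, which makes it handy for exercising error paths.
if false; then
    echo "never reached"
else
    echo "false exited with status $?"    # prints 1
fi

false || echo "short-circuit: the right-hand side runs on failure"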
# false\n\n> Returns a non-zero exit code.\n> More information: <https://www.gnu.org/software/coreutils/false>.\n\n- Return a non-zero exit code:\n\n`false`\n
fatlabel
fatlabel(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fatlabel(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | COMPATIBILITY and BUGS | DOS CODEPAGES | SEE ALSO | HOMEPAGE | AUTHORS | COLOPHON FATLABEL(8) System Manager's Manual FATLABEL(8) NAME top fatlabel - set or get MS-DOS filesystem label or volume ID SYNOPSIS top fatlabel [OPTIONS] DEVICE [NEW] DESCRIPTION top fatlabel will display or change the volume label or volume ID on the MS-DOS filesystem located on DEVICE. By default it works in label mode. It can be switched to volume ID mode with the option -i or --volume-id. If NEW is omitted, then the existing label or volume ID is written to the standard output. A label can't be longer than 11 bytes and should be in all upper case for best compatibility. An empty string or a label consisting only of white space is not allowed. A volume ID must be given as a hexadecimal number (no leading "0x" or similar) and must fit into 32 bits. OPTIONS top -i, --volume-id Switch to volume ID mode. -r, --reset Remove label in label mode or generate new ID in volume ID mode. -c PAGE, --codepage=PAGE Use DOS codepage PAGE to encode/decode label. By default codepage 850 is used. -h, --help Display a help message and terminate. -V, --version Show version number and terminate. COMPATIBILITY and BUGS top For historic reasons FAT label is stored in two different locations: in the boot sector and as a special volume label entry in the root directory. MS-DOS 5.00, MS-DOS 6.22, MS-DOS 7.10, Windows 98, Windows XP and also Windows 10 read FAT label only from the root directory. Absence of the volume label in the root directory is interpreted as an empty or missing label, even if boot sector contains some valid label. When a Windows XP or Windows 10 system changes a FAT label, it stores it only in the root directory, leaving the boot sector unchanged. This leads to problems when a label is removed on Windows. The old label is still stored in the boot sector but is removed from the root directory. dosfslabel prior to the version 3.0.7 operated only with FAT labels stored in the boot sector, completely ignoring a volume label in the root directory. dosfslabel in versions 3.0.7–3.0.15 reads FAT labels from the root directory and in case of absence, it falls back to a label stored in the boot sector. A change operation resulted in updating the label in the boot sector and sometimes also in the root directory due to the bug. That bug was fixed in dosfslabel version 3.0.16 and since that version dosfslabel updates the label in both locations. Since version 4.2, fatlabel reads a FAT label only from the root directory (like MS-DOS and Windows systems), but changes a FAT label in both locations. Version 4.2 fixed the handling of empty labels and of labels which start with the byte 0xE5. This version also added support for non-ASCII labels according to the specified DOS codepage and added checks that a new label is valid. It is strongly suggested to not use dosfslabel prior to version 3.0.16. DOS CODEPAGES top MS-DOS and Windows systems use DOS (OEM) codepage for encoding and decoding FAT label. In Windows systems DOS codepage is global for all running applications and cannot be configured explicitly. It is set implicitly by option Language for non-Unicode programs available in Regional and Language Options via Control Panel. Default DOS codepage for fatlabel is 850.
See following mapping table between DOS codepage and Language for non-Unicode programs: Codepage Language 437 English (India), English (Malaysia), English (Republic of the Philippines), English (Singapore), English (South Africa), English (United States), English (Zimbabwe), Filipino, Hausa, Igbo, Inuktitut, Kinyarwanda, Kiswahili, Yoruba 720 Arabic, Dari, Persian, Urdu, Uyghur 737 Greek 775 Estonian, Latvian, Lithuanian 850 Afrikaans, Alsatian, Basque, Breton, Catalan, Corsican, Danish, Dutch, English (Australia), English (Belize), English (Canada), English (Caribbean), English (Ireland), English (Jamaica), English (New Zealand), English (Trinidad and Tobago), English (United Kingdom), Faroese, Finnish, French, Frisian, Galician, German, Greenlandic, Icelandic, Indonesian, Irish, isiXhosa, isiZulu, Italian, K'iche, Lower Sorbian, Luxembourgish, Malay, Mapudungun, Mohawk, Norwegian, Occitan, Portuguese, Quechua, Romansh, Sami, Scottish Gaelic, Sesotho sa Leboa, Setswana, Spanish, Swedish, Tamazight, Upper Sorbian, Welsh, Wolof 852 Albanian, Bosnian (Latin), Croatian, Czech, Hungarian, Polish, Romanian, Serbian (Latin), Slovak, Slovenian, Turkmen 855 Bosnian (Cyrillic), Serbian (Cyrillic) 857 Azeri (Latin), Turkish, Uzbek (Latin) 862 Hebrew 866 Azeri (Cyrillic), Bashkir, Belarusian, Bulgarian, Kyrgyz, Macedonian, Mongolian, Russian, Tajik, Tatar, Ukrainian, Uzbek (Cyrillic), Yakut 874 Thai 932 Japanese 936 Chinese (Simplified) 949 Korean 950 Chinese (Traditional) 1258 Vietnamese SEE ALSO top fsck.fat(8), mkfs.fat(8) HOMEPAGE top The home for the dosfstools project is its GitHub project page https://github.com/dosfstools/dosfstools. AUTHORS top dosfstools were written by Werner Almesberger werner.almesberger@lrc.di.epfl.ch, Roman Hodek Roman.Hodek@ informatik.uni-erlangen.de, and others. Current maintainers are Andreas Bombe aeb@debian.org and Pali Rohr pali.rohar@ gmail.com. COLOPHON top This page is part of the dosfstools (Tools for making and checking MS-DOS FAT filesystems) project. Information about the project can be found at https://github.com/dosfstools/dosfstools. If you have a bug report for this manual page, see https://github.com/dosfstools/dosfstools/issues. This page was obtained from the project's upstream Git repository https://github.com/dosfstools/dosfstools.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-10-10.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org dosfstools 4.2+git 2021-01-31 FATLABEL(8) Pages that refer to this page: fstab(5), dosfsck(8), fsck.fat(8), fsck.msdos(8), fsck.vfat(8), mkdosfs(8), mkfs.fat(8), mkfs.msdos(8), mkfs.vfat(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
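Beyond the plain label operations, the volume ID mode and codepage handling described above look like this in practice (a sketch only; the device path, ID and label are examples):

#!/bin/sh
# Volume ID mode: print, set, or regenerate the 32-bit serial number.
sudo fatlabel --volume-id /dev/sdb1
sudo fatlabel --volume-id /dev/sdb1 1234abcd
sudo fatlabel --volume-id --reset /dev/sdb1

# Encode a label with an explicit DOS codepage instead of the default 850:
sudo fatlabel --codepage=852 /dev/sdb1 "ETYKIETA"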
# fatlabel\n\n> Set or get the label of a FAT filesystem.\n> More information: <https://manned.org/fatlabel>.\n\n- Get the label of a FAT filesystem:\n\n`fatlabel {{/dev/sda1}}`\n\n- Set the label of a FAT filesystem:\n\n`fatlabel {{/dev/sdc3}} "{{new_label}}"`\n
fc
fc(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fc(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT FC(1P) POSIX Programmer's Manual FC(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top fc process the command history list SYNOPSIS top fc [-r] [-e editor] [first [last]] fc -l [-nr] [first [last]] fc -s [old=new] [first] DESCRIPTION top The fc utility shall list, or shall edit and re-execute, commands previously entered to an interactive sh. The command history list shall reference commands by number. The first number in the list is selected arbitrarily. The relationship of a number to its command shall not change except when the user logs in and no other process is accessing the list, at which time the system may reset the numbering to start the oldest retained command at another number (usually 1). When the number reaches an implementation-defined upper limit, which shall be no smaller than the value in HISTSIZE or 32767 (whichever is greater), the shell may wrap the numbers, starting the next command with a lower number (usually 1). However, despite this optional wrapping of numbers, fc shall maintain the time-ordering sequence of the commands. For example, if four commands in sequence are given the numbers 32766, 32767, 1 (wrapped), and 2 as they are executed, command 32767 is considered the command previous to 1, even though its number is higher. When commands are edited (when the -l option is not specified), the resulting lines shall be entered at the end of the history list and then re-executed by sh. The fc command that caused the editing shall not be entered into the history list. If the editor returns a non-zero exit status, this shall suppress the entry into the history list and the command re-execution. Any command line variable assignments or redirection operators used with fc shall affect both the fc command itself as well as the command that results; for example: fc -s -- -1 2>/dev/null reinvokes the previous command, suppressing standard error for both fc and the previous command. OPTIONS top The fc utility shall conform to the Base Definitions volume of POSIX.12017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -e editor Use the editor named by editor to edit the commands. The editor string is a utility name, subject to search via the PATH variable (see the Base Definitions volume of POSIX.12017, Chapter 8, Environment Variables). The value in the FCEDIT variable shall be used as a default when -e is not specified. If FCEDIT is null or unset, ed shall be used as the editor. -l (The letter ell.) List the commands rather than invoking an editor on them. The commands shall be written in the sequence indicated by the first and last operands, as affected by -r, with each command preceded by the command number. -n Suppress command numbers when listing with -l. -r Reverse the order of the commands listed (with -l) or edited (with neither -l nor -s). -s Re-execute the command without invoking an editor. 
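Since fc is a shell built-in that operates on the interactive command history, the options above are easiest to see at an interactive prompt; the lines below are a sketch of typical invocations (the command numbers and the editor are examples only):

# Typed at an interactive sh prompt, not run as a batch script.
fc -l -10          # list the last ten commands with their numbers
fc -l -n -10       # the same listing without numbers
fc -r -l 100 110   # list commands 100 through 110, newest first
fc -e vi -2        # edit the command before last in vi, then re-run it
fc -s cc=gcc 125   # re-run command 125 with "cc" replaced by "gcc"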
OPERANDS top The following operands shall be supported: first, last Select the commands to list or edit. The number of previous commands that can be accessed shall be determined by the value of the HISTSIZE variable. The value of first or last or both shall be one of the following: [+]number A positive number representing a command number; command numbers can be displayed with the -l option. -number A negative decimal number representing the command that was executed number of commands previously. For example, -1 is the immediately previous command. string A string indicating the most recently entered command that begins with that string. If the old=new operand is not also specified with -s, the string form of the first operand cannot contain an embedded <equals-sign>. When the synopsis form with -s is used: * If first is omitted, the previous command shall be used. For the synopsis forms without -s: * If last is omitted, last shall default to the previous command when -l is specified; otherwise, it shall default to first. * If first and last are both omitted, the previous 16 commands shall be listed or the previous single command shall be edited (based on the -l option). * If first and last are both present, all of the commands from first to last shall be edited (without -l) or listed (with -l). Editing multiple commands shall be accomplished by presenting to the editor all of the commands at one time, each command starting on a new line. If first represents a newer command than last, the commands shall be listed or edited in reverse sequence, equivalent to using -r. For example, the following commands on the first line are equivalent to the corresponding commands on the second: fc -r 10 20 fc 30 40 fc 20 10 fc -r 40 30 * When a range of commands is used, it shall not be an error to specify first or last values that are not in the history list; fc shall substitute the value representing the oldest or newest command in the list, as appropriate. For example, if there are only ten commands in the history list, numbered 1 to 10: fc -l fc 1 99 shall list and edit, respectively, all ten commands. old=new Replace the first occurrence of string old in the commands to be re-executed by the string new. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of fc: FCEDIT This variable, when expanded by the shell, shall determine the default value for the -e editor option's editor option-argument. If FCEDIT is null or unset, ed shall be used as the editor. HISTFILE Determine a pathname naming a command history file. If the HISTFILE variable is not set, the shell may attempt to access or create a file .sh_history in the directory referred to by the HOME environment variable. If the shell cannot obtain both read and write access to, or create, the history file, it shall use an unspecified mechanism that allows the history to operate properly. (References to history ``file'' in this section shall be understood to mean this unspecified mechanism in such cases.) An implementation may choose to access this variable only when initializing the history file; this initialization shall occur when fc or sh first attempt to retrieve entries from, or add entries to, the file, as the result of commands issued by the user, the file named by the ENV variable, or implementation- defined system start-up files. In some historical shells, the history file is initialized just after the ENV file has been processed. 
Therefore, it is implementation-defined whether changes made to HISTFILE after the history file has been initialized are effective. Implementations may choose to disable the history list mechanism for users with appropriate privileges who do not set HISTFILE; the specific circumstances under which this occurs are implementation-defined. If more than one instance of the shell is using the same history file, it is unspecified how updates to the history file from those shells interact. As entries are deleted from the history file, they shall be deleted oldest first. It is unspecified when history file entries are physically removed from the history file. HISTSIZE Determine a decimal number representing the limit to the number of previous commands that are accessible. If this variable is unset, an unspecified default greater than or equal to 128 shall be used. The maximum number of commands in the history list is unspecified, but shall be at least 128. An implementation may choose to access this variable only when initializing the history file, as described under HISTFILE. Therefore, it is unspecified whether changes made to HISTSIZE after the history file has been initialized are effective. LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top Default. STDOUT top When the -l option is used to list commands, the format of each command in the list shall be as follows: "%d\t%s\n", <line number>, <command> If both the -l and -n options are specified, the format of each command shall be: "\t%s\n", <command> If the <command> consists of more than one line, the lines after the first shall be displayed as: "\t%s\n", <continued-command> STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 Successful completion of the listing. >0 An error occurred. Otherwise, the exit status shall be that of the commands executed by fc. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top Since editors sometimes use file descriptors as integral parts of their editing, redirecting their file descriptors as part of the fc command can produce unexpected results. For example, if vi is the FCEDIT editor, the command: fc -s | more does not work correctly on many systems. Users on windowing systems may want to have separate history files for each window by setting HISTFILE as follows: HISTFILE=$HOME/.sh_hist$$ EXAMPLES top None. RATIONALE top This utility is based on the fc built-in of the KornShell. An early proposal specified the -e option as [-e editor [old= new ]], which is not historical practice. 
Historical practice in fc of either [-e editor] or [-e - [ old= new ]] is acceptable, but not both together. To clarify this, a new option -s was introduced replacing the [-e -]. This resolves the conflict and makes fc conform to the Utility Syntax Guidelines. HISTFILE Some implementations of the KornShell check for the superuser and do not create a history file unless HISTFILE is set. This is done primarily to avoid creating unlinked files in the root file system when logging in during single-user mode. HISTFILE must be set for the superuser to have history. HISTSIZE Needed to limit the size of history files. It is the intent of the standard developers that when two shells share the same history file, commands that are entered in one shell shall be accessible by the other shell. Because of the difficulties of synchronization over a network, the exact nature of the interaction is unspecified. The initialization process for the history file can be dependent on the system start-up files, in that they may contain commands that effectively preempt the settings the user has for HISTFILE and HISTSIZE. For example, function definition commands are recorded in the history file. If the system administrator includes function definitions in some system start-up file called before the ENV file, the history file is initialized before the user can influence its characteristics. In some historical shells, the history file is initialized just after the ENV file has been processed. Because of these situations, the text requires the initialization process to be implementation-defined. Consideration was given to omitting the fc utility in favor of the command line editing feature in sh. For example, in vi editing mode, typing "<ESC>v" is equivalent to: EDITOR=vi fc However, the fc utility allows the user the flexibility to edit multiple commands simultaneously (such as fc 10 20) and to use editors other than those supported by sh for command line editing. In the KornShell, the alias r (``re-do'') is preset to fc -e - (equivalent to the POSIX fc -s). This is probably an easier command name to remember than fc (``fix command''), but it does not meet the Utility Syntax Guidelines. Renaming fc to hist or redo was considered, but since this description closely matches historical KornShell practice already, such a renaming was seen as gratuitous. Users are free to create aliases whenever odd historical names such as fc, awk, cat, grep, or yacc are standardized by POSIX. Command numbers have no ordering effects; they are like serial numbers. The -r option and -number operand address the sequence of command execution, regardless of serial numbers. So, for example, if the command number wrapped back to 1 at some arbitrary point, there would be no ambiguity associated with traversing the wrap point. For example, if the command history were: 32766: echo 1 32767: echo 2 1: echo 3 the number -2 refers to command 32767 because it is the second previous command, regardless of serial number. FUTURE DIRECTIONS top None. SEE ALSO top sh(1p) The Base Definitions volume of POSIX.12017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. 
In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 FC(1P)
# fc\n\n> List, edit, and re-execute commands from the shell history.\n> More information: <https://manned.org/fc>.\n\n- Open the most recent command in the default editor and re-execute it when the editor exits:\n\n`fc`\n\n- Specify an editor to open with:\n\n`fc -e {{'emacs'}}`\n\n- List recent commands from history:\n\n`fc -l`\n\n- List recent commands in reverse order:\n\n`fc -l -r`\n\n- List commands in a given interval:\n\n`fc -l '{{416}}' '{{420}}'`\n
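A few more invocations drawn from the manual above, shown as a sketch (the command numbers 10 and 15 and the foo=bar substitution are illustrative):

$ fc -l -5           # list the last five commands from the history
$ fc -s              # re-execute the previous command without opening an editor
$ fc -s foo=bar      # re-run the previous command with the first occurrence of "foo" replaced by "bar"
$ fc -e vi 10 15     # edit commands 10 through 15 in vi, then re-execute them when the editor exits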
fdisk
fdisk(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fdisk(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | DEVICES | SIZES | SCRIPT FILES | DISK LABELS | DOS MODE AND DOS 6.X WARNING | COLORS | ENVIRONMENT | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY FDISK(8) System Administration FDISK(8) NAME top fdisk - manipulate disk partition table SYNOPSIS top fdisk [options] device fdisk -l [device...] DESCRIPTION top fdisk is a dialog-driven program for creation and manipulation of partition tables. It understands GPT, MBR, Sun, SGI and BSD partition tables. Block devices can be divided into one or more logical disks called partitions. This division is recorded in the partition table, usually found in sector 0 of the disk. (In the BSD world one talks about `disk slices' and a `disklabel'.) All partitioning is driven by device I/O limits (the topology) by default. fdisk is able to optimize the disk layout for a 4K-sector size and use an alignment offset on modern devices for MBR and GPT. It is always a good idea to follow fdisk's defaults as the default values (e.g., first and last partition sectors) and partition sizes specified by the +/-<size>{M,G,...} notation are always aligned according to the device properties. CHS (Cylinder-Head-Sector) addressing is deprecated and not used by default. Please, do not follow old articles and recommendations with fdisk -S <n> -H <n> advices for SSD or 4K-sector devices. Note that partx(8) provides a rich interface for scripts to print disk layouts, fdisk is mostly designed for humans. Backward compatibility in the output of fdisk is not guaranteed. The input (the commands) should always be backward compatible. OPTIONS top -b, --sector-size sectorsize Specify the sector size of the disk. Valid values are 512, 1024, 2048, and 4096. (Recent kernels know the sector size. Use this option only on old kernels or to override the kernels ideas.) Since util-linux-2.17, fdisk differentiates between logical and physical sector size. This option changes both sector sizes to sectorsize. -B, --protect-boot Dont erase the beginning of the first disk sector when creating a new disk label. This feature is supported for GPT and MBR. -c, --compatibility[=mode] Specify the compatibility mode, 'dos' or 'nondos'. The default is non-DOS mode. For backward compatibility, it is possible to use the option without the mode argument then the default is used. Note that the optional mode argument cannot be separated from the -c option by a space, the correct form is for example -c=dos. -h, --help Display help text and exit. -V, --version Print version and exit. -L, --color[=when] Colorize the output. The optional argument when can be auto, never or always. If the when argument is omitted, it defaults to auto. The colors can be disabled; for the current built-in default see the --help output. See also the COLORS section. -l, --list List the partition tables for the specified devices and then exit. If no devices are given, the devices mentioned in /proc/partitions (if this file exists) are used. Devices are always listed in the order in which they are specified on the command-line, or by the kernel listed in /proc/partitions. -x, --list-details Like --list, but provides more details. --lock[=mode] Use exclusive BSD lock for device or file it operates. The optional argument mode can be yes, no (or 1 and 0) or nonblock. If the mode argument is omitted, it defaults to yes. 
This option overwrites environment variable $LOCK_BLOCK_DEVICE. The default is not to use any lock at all, but its recommended to avoid collisions with systemd-udevd(8) or other tools. -n, --noauto-pt Dont automatically create a default partition table on empty device. The partition table has to be explicitly created by user (by command like 'o', 'g', etc.). -o, --output list Specify which output columns to print. Use --help to get a list of all supported columns. The default list of columns may be extended if list is specified in the format +list (e.g., -o +UUID). -s, --getsz Print the size in 512-byte sectors of each given block device. This option is DEPRECATED in favour of blockdev(8). -t, --type type Enable support only for disklabels of the specified type, and disable support for all other types. -u, --units[=unit] When listing partition tables, show sizes in 'sectors' or in 'cylinders'. The default is to show sizes in sectors. For backward compatibility, it is possible to use the option without the unit argument then the default is used. Note that the optional unit argument cannot be separated from the -u option by a space, the correct form is for example '-u=cylinders'. -C, --cylinders number Specify the number of cylinders of the disk. I have no idea why anybody would want to do so. -H, --heads number Specify the number of heads of the disk. (Not the physical number, of course, but the number used for partition tables.) Reasonable values are 255 and 16. -S, --sectors number Specify the number of sectors per track of the disk. (Not the physical number, of course, but the number used for partition tables.) A reasonable value is 63. -w, --wipe when Wipe filesystem, RAID and partition-table signatures from the device, in order to avoid possible collisions. The argument when can be auto, never or always. When this option is not given, the default is auto, in which case signatures are wiped only when in interactive mode. In all cases detected signatures are reported by warning messages before a new partition table is created. See also wipefs(8) command. -W, --wipe-partitions when Wipe filesystem, RAID and partition-table signatures from a newly created partitions, in order to avoid possible collisions. The argument when can be auto, never or always. When this option is not given, the default is auto, in which case signatures are wiped only when in interactive mode and after confirmation by user. In all cases detected signatures are reported by warning messages before a new partition is created. See also wipefs(8) command. -V, --version Display version information and exit. DEVICES top The device is usually /dev/sda, /dev/sdb or so. A device name refers to the entire disk. Old systems without libata (a library used inside the Linux kernel to support ATA host controllers and devices) make a difference between IDE and SCSI disks. In such cases the device name will be /dev/hd* (IDE) or /dev/sd* (SCSI). The partition is a device name followed by a partition number. For example, /dev/sda1 is the first partition on the first hard disk in the system. See also Linux kernel documentation (the Documentation/admin-guide/devices.txt file). SIZES top The "last sector" dialog accepts partition size specified by number of sectors or by +/-<size>{K,B,M,G,...} notation. If the size is prefixed by '+' then it is interpreted as relative to the partition first sector. If the size is prefixed by '-' then it is interpreted as relative to the high limit (last available sector for the partition). 
In the case the size is specified in bytes than the number may be followed by the multiplicative suffixes KiB=1024, MiB=1024*1024, and so on for GiB, TiB, PiB, EiB, ZiB and YiB. The "iB" is optional, e.g., "K" has the same meaning as "KiB". The relative sizes are always aligned according to device I/O limits. The +/-<size>{K,B,M,G,...} notation is recommended. For backward compatibility fdisk also accepts the suffixes KB=1000, MB=1000*1000, and so on for GB, TB, PB, EB, ZB and YB. These 10^N suffixes are deprecated. SCRIPT FILES top fdisk allows reading (by 'I' command) sfdisk(8) compatible script files. The script is applied to in-memory partition table, and then it is possible to modify the partition table before you write it to the device. And vice-versa it is possible to write the current in-memory disk layout to the script file by command 'O'. The script files are compatible between cfdisk(8), sfdisk(8), fdisk and other libfdisk applications. For more details see sfdisk(8). DISK LABELS top GPT (GUID Partition Table) GPT is modern standard for the layout of the partition table. GPT uses 64-bit logical block addresses, checksums, UUIDs and names for partitions and an unlimited number of partitions (although the number of partitions is usually restricted to 128 in many partitioning tools). Note that the first sector is still reserved for a protective MBR in the GPT specification. It prevents MBR-only partitioning tools from mis-recognizing and overwriting GPT disks. GPT is always a better choice than MBR, especially on modern hardware with a UEFI boot loader. DOS-type (MBR) A DOS-type partition table can describe an unlimited number of partitions. In sector 0 there is room for the description of 4 partitions (called `primary'). One of these may be an extended partition; this is a box holding logical partitions, with descriptors found in a linked list of sectors, each preceding the corresponding logical partitions. The four primary partitions, present or not, get numbers 1-4. Logical partitions are numbered starting from 5. In a DOS-type partition table the starting offset and the size of each partition is stored in two ways: as an absolute number of sectors (given in 32 bits), and as a Cylinders/Heads/Sectors triple (given in 10+8+6 bits). The former is OK with 512-byte sectors this will work up to 2 TB. The latter has two problems. First, these C/H/S fields can be filled only when the number of heads and the number of sectors per track are known. And second, even if we know what these numbers should be, the 24 bits that are available do not suffice. DOS uses C/H/S only, Windows uses both, Linux never uses C/H/S. The C/H/S addressing is deprecated and may be unsupported in some later fdisk version. Please, read the DOS-mode section if you want DOS-compatible partitions. fdisk does not care about cylinder boundaries by default. BSD/Sun-type A BSD/Sun disklabel can describe 8 partitions, the third of which should be a `whole disk' partition. Do not start a partition that actually uses its first sector (like a swap partition) at cylinder 0, since that will destroy the disklabel. Note that a BSD label is usually nested within a DOS partition. IRIX/SGI-type An IRIX/SGI disklabel can describe 16 partitions, the eleventh of which should be an entire `volume' partition, while the ninth should be labeled `volume header'. The volume header will also cover the partition table, i.e., it starts at block zero and extends by default over five cylinders. 
The remaining space in the volume header may be used by header directory entries. No partitions may overlap with the volume header. Also do not change its type or make some filesystem on it, since you will lose the partition table. Use this type of label only when working with Linux on IRIX/SGI machines or IRIX/SGI disks under Linux. A sync(2) and an ioctl(BLKRRPART) (rereading the partition table from disk) are performed before exiting when the partition table has been updated. DOS MODE AND DOS 6.X WARNING top Note that all this is deprecated. You dont have to care about things like geometry and cylinders on modern operating systems. If you really want DOS-compatible partitioning then you have to enable DOS mode and cylinder units by using the '-c=dos -u=cylinders' fdisk command-line options. The DOS 6.x FORMAT command looks for some information in the first sector of the data area of the partition, and treats this information as more reliable than the information in the partition table. DOS FORMAT expects DOS FDISK to clear the first 512 bytes of the data area of a partition whenever a size change occurs. DOS FORMAT will look at this extra information even if the /U flag is given we consider this a bug in DOS FORMAT and DOS FDISK. The bottom line is that if you use fdisk or cfdisk(8) to change the size of a DOS partition table entry, then you must also use dd(1) to zero the first 512 bytes of that partition before using DOS FORMAT to format the partition. For example, if you were using fdisk to make a DOS partition table entry for /dev/sda1, then (after exiting fdisk and rebooting Linux so that the partition table information is valid) you would use the command dd if=/dev/zero of=/dev/sda1 bs=512 count=1 to zero the first 512 bytes of the partition. fdisk usually obtains the disk geometry automatically. This is not necessarily the physical disk geometry (indeed, modern disks do not really have anything like a physical geometry, certainly not something that can be described in the simplistic Cylinders/Heads/Sectors form), but it is the disk geometry that MS-DOS uses for the partition table. Usually all goes well by default, and there are no problems if Linux is the only system on the disk. However, if the disk has to be shared with other operating systems, it is often a good idea to let an fdisk from another operating system make at least one partition. When Linux boots it looks at the partition table, and tries to deduce what (fake) geometry is required for good cooperation with other systems. Whenever a partition table is printed out in DOS mode, a consistency check is performed on the partition table entries. This check verifies that the physical and logical start and end points are identical, and that each partition starts and ends on a cylinder boundary (except for the first partition). Some versions of MS-DOS create a first partition which does not begin on a cylinder boundary, but on sector 2 of the first cylinder. Partitions beginning in cylinder 1 cannot begin on a cylinder boundary, but this is unlikely to cause difficulty unless you have OS/2 on your machine. For best results, you should always use an OS-specific partition table program. For example, you should make DOS partitions with the DOS FDISK program and Linux partitions with the Linux fdisk or Linux cfdisk(8) programs. COLORS top The output colorization is implemented by terminal-colors.d(5) functionality. 
Implicit coloring can be disabled by an empty file /etc/terminal-colors.d/fdisk.disable for the fdisk command or for all tools by /etc/terminal-colors.d/disable The user-specific $XDG_CONFIG_HOME/terminal-colors.d or $HOME/.config/terminal-colors.d overrides the global setting. Note that the output colorization may be enabled by default, and in this case terminal-colors.d directories do not have to exist yet. The logical color names supported by fdisk are: header The header of the output tables. help-title The help section titles. warn The warning messages. welcome The welcome message. ENVIRONMENT top FDISK_DEBUG=all enables fdisk debug output. LIBFDISK_DEBUG=all enables libfdisk debug output. LIBBLKID_DEBUG=all enables libblkid debug output. LIBSMARTCOLS_DEBUG=all enables libsmartcols debug output. LIBSMARTCOLS_DEBUG_PADDING=on use visible padding characters. LOCK_BLOCK_DEVICE=<mode> use exclusive BSD lock. The mode is "1" or "0". See --lock for more details. AUTHORS top Karel Zak <kzak@redhat.com>, Davidlohr Bueso <dave@gnu.org> The original version was written by Andries E. Brouwer, A. V. Le Blanc and others. SEE ALSO top cfdisk(8), mkfs(8), partx(8), sfdisk(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The fdisk command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. util-linux 2.39.594-1e0ad 2023-07-19 FDISK(8)
# fdisk\n\n> Manage partition tables and partitions on a hard disk.\n> See also: `partprobe`.\n> More information: <https://manned.org/fdisk>.\n\n- List partitions:\n\n`sudo fdisk -l`\n\n- Start the partition manipulator:\n\n`sudo fdisk {{/dev/sdX}}`\n\n- While partitioning a disk, create a partition:\n\n`n`\n\n- While partitioning a disk, select a partition to delete:\n\n`d`\n\n- While partitioning a disk, view the partition table:\n\n`p`\n\n- While partitioning a disk, write the changes made:\n\n`w`\n\n- While partitioning a disk, discard the changes made:\n\n`q`\n\n- While partitioning a disk, open a help menu:\n\n`m`\n
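A sketch of a typical interactive session, assuming an empty disk at /dev/sdb and a 10 GiB target size (both illustrative); the +size notation from the SIZES section keeps the partition aligned to the device's I/O limits, and nothing touches the disk until the w command:

$ sudo fdisk -l /dev/sdb    # inspect the current partition table first
$ sudo fdisk /dev/sdb       # open the interactive prompt
  g                         # create a new, empty GPT disklabel
  n                         # add a partition; accept the defaults for number and first sector
  +10G                      # at the last-sector prompt, request a 10 GiB partition
  p                         # review the in-memory partition table
  w                         # write the table to disk and exit (or q to quit without saving)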
fg
fg(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fg(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT FG(1P) POSIX Programmer's Manual FG(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top fg run jobs in the foreground SYNOPSIS top fg [job_id] DESCRIPTION top If job control is enabled (see the description of set -m), the fg utility shall move a background job from the current environment (see Section 2.12, Shell Execution Environment) into the foreground. Using fg to place a job into the foreground shall remove its process ID from the list of those ``known in the current shell execution environment''; see Section 2.9.3.1, Examples. OPTIONS top None. OPERANDS top The following operand shall be supported: job_id Specify the job to be run as a foreground job. If no job_id operand is given, the job_id for the job that was most recently suspended, placed in the background, or run as a background job shall be used. The format of job_id is described in the Base Definitions volume of POSIX.12017, Section 3.204, Job Control Job ID. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of fg: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top Default. STDOUT top The fg utility shall write the command line of the job to standard output in the following format: "%s\n", <command> STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 Successful completion. >0 An error occurred. CONSEQUENCES OF ERRORS top If job control is disabled, the fg utility shall exit with an error and no job shall be placed in the foreground. The following sections are informative. APPLICATION USAGE top The fg utility does not work as expected when it is operating in its own utility execution environment because that environment has no applicable jobs to manipulate. See the APPLICATION USAGE section for bg(1p). For this reason, fg is generally implemented as a shell regular built-in. EXAMPLES top None. 
RATIONALE top The extensions to the shell specified in this volume of POSIX.12017 have mostly been based on features provided by the KornShell. The job control features provided by bg, fg, and jobs are also based on the KornShell. The standard developers examined the characteristics of the C shell versions of these utilities and found that differences exist. Despite widespread use of the C shell, the KornShell versions were selected for this volume of POSIX.12017 to maintain a degree of uniformity with the rest of the KornShell features selected (such as the very popular command line editing features). FUTURE DIRECTIONS top None. SEE ALSO top Section 2.9.3.1, Examples, Section 2.12, Shell Execution Environment, bg(1p), kill(1p), jobs(1p), wait(1p) The Base Definitions volume of POSIX.12017, Section 3.204, Job Control Job ID, Chapter 8, Environment Variables COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 FG(1P)
# fg\n\n> Run jobs in the foreground.\n> More information: <https://manned.org/fg>.\n\n- Bring the most recently suspended or backgrounded job to the foreground:\n\n`fg`\n\n- Bring a specific job to the foreground:\n\n`fg %{{job_id}}`\n
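A short job-control sketch (sleep stands in for any long-running command):

$ sleep 300 &    # start a job in the background
$ jobs           # list jobs known to the current shell, with their job IDs
$ fg %1          # bring job 1 into the foreground
# press Ctrl-Z to suspend the foreground job again, then:
$ fg             # with no operand, resume the most recently suspended or backgrounded job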
file
file(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training file(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT | FILES | EXIT STATUS | EXAMPLES | SEE ALSO | STANDARDS CONFORMANCE | SECURITY | MAGIC DIRECTORY | HISTORY | LEGAL NOTICE | BUGS | TODO | AVAILABILITY | COLOPHON FILE(1) General Commands Manual FILE(1) NAME top file determine file type SYNOPSIS top [-bcdEhiklLNnprsSvzZ0] [--apple] [--exclude-quiet] [--extension] [--mime-encoding] [--mime-type] [-e testname] [-F separator] [-f namefile] [-m magicfiles] [-P name=value] file ... -C [-m magicfiles] [--help] DESCRIPTION top This manual page documents version 5.45 of the command. tests each argument in an attempt to classify it. There are three sets of tests, performed in this order: filesystem tests, magic tests, and language tests. The first test that succeeds causes the file type to be printed. The type printed will usually contain one of the words text (the file contains only printing characters and a few common control characters and is probably safe to read on an ASCII terminal), executable (the file contains the result of compiling a program in a form understandable to some UNIX kernel or another), or data meaning anything else (data is usually binary or non- printable). Exceptions are well-known file formats (core files, tar archives) that are known to contain binary data. When modifying magic files or the program itself, make sure to preserve these keywords. Users depend on knowing that all the readable files in a directory have the word text printed. Don't do as Berkeley did and change shell commands text to shell script. The filesystem tests are based on examining the return from a stat(2) system call. The program checks to see if the file is empty, or if it's some sort of special file. Any known file types appropriate to the system you are running on (sockets, symbolic links, or named pipes (FIFOs) on those systems that implement them) are intuited if they are defined in the system header file <sys/stat.h>. The magic tests are used to check for files with data in particular fixed formats. The canonical example of this is a binary executable (compiled program) a.out file, whose format is defined in <elf.h>, <a.out.h> and possibly <exec.h> in the standard include directory. These files have a magic number stored in a particular place near the beginning of the file that tells the UNIX operating system that the file is a binary executable, and which of several types thereof. The concept of a magic number has been applied by extension to data files. Any file with some invariant identifier at a small fixed offset into the file can usually be described in this way. The information identifying these files is read from the compiled magic file /usr/local/share/misc/magic.mgc, or the files in the directory /usr/local/share/misc/magic if the compiled file does not exist. In addition, if $HOME/.magic.mgc or $HOME/.magic exists, it will be used in preference to the system magic files. If a file does not match any of the entries in the magic file, it is examined to see if it seems to be a text file. ASCII, ISO-8859-x, non-ISO 8-bit extended-ASCII character sets (such as those used on Macintosh and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded Unicode, and EBCDIC character sets can be distinguished by the different ranges and sequences of bytes that constitute printable text in each set. If a file passes any of these tests, its character set is reported. 
ASCII, ISO-8859-x, UTF-8, and extended-ASCII files are identified as text because they will be mostly readable on nearly any terminal; UTF-16 and EBCDIC are only character data because, while they contain text, it is text that will require translation before it can be read. In addition, will attempt to determine other characteristics of text-type files. If the lines of a file are terminated by CR, CRLF, or NEL, instead of the Unix-standard LF, this will be reported. Files that contain embedded escape sequences or overstriking will also be identified. Once has determined the character set used in a text-type file, it will attempt to determine in what language the file is written. The language tests look for particular strings (cf. <names.h>) that can appear anywhere in the first few blocks of a file. For example, the keyword .br indicates that the file is most likely a troff(1) input file, just as the keyword struct indicates a C program. These tests are less reliable than the previous two groups, so they are performed last. The language test routines also test for some miscellany (such as tar(1) archives, JSON files). Any file that cannot be identified as having been written in any of the character sets listed above is simply said to be data. OPTIONS top --apple Causes the command to output the file type and creator code as used by older MacOS versions. The code consists of eight letters, the first describing the file type, the latter the creator. This option works properly only for file formats that have the apple-style output defined. -b, --brief Do not prepend filenames to output lines (brief mode). -C, --compile Write a magic.mgc output file that contains a pre-parsed version of the magic file or directory. -c, --checking-printout Cause a checking printout of the parsed form of the magic file. This is usually used in conjunction with the -m option to debug a new magic file before installing it. -d Prints internal debugging information to stderr. -E On filesystem errors (file not found etc), instead of handling the error as regular output as POSIX mandates and keep going, issue an error message and exit. -e, --exclude testname Exclude the test named in testname from the list of tests made to determine the file type. Valid test names are: apptype EMX application type (only on EMX). ascii Various types of text files (this test will try to guess the text encoding, irrespective of the setting of the encoding option). encoding Different text encodings for soft magic tests. tokens Ignored for backwards compatibility. cdf Prints details of Compound Document Files. compress Checks for, and looks inside, compressed files. csv Checks Comma Separated Value files. elf Prints ELF file details, provided soft magic tests are enabled and the elf magic is found. json Examines JSON (RFC-7159) files by parsing them for compliance. soft Consults magic files. simh Examines SIMH tape files. tar Examines tar files by verifying the checksum of the 512 byte tar header. Excluding this test can provide more detailed content description by using the soft magic method. text A synonym for ascii. --exclude-quiet Like --exclude but ignore tests that does not know about. This is intended for compatibility with older versions of . --extension Print a slash-separated list of valid extensions for the file type found. -F, --separator separator Use the specified string as the separator between the filename and the file result returned. Defaults to :. 
-f, --files-from namefile Read the names of the files to be examined from namefile (one per line) before the argument list. Either namefile or at least one filename argument must be present; to test the standard input, use - as a filename argument. Please note that namefile is unwrapped and the enclosed filenames are processed when this option is encountered and before any further options processing is done. This allows one to process multiple lists of files with different command line arguments on the same invocation. Thus if you want to set the delimiter, you need to do it before you specify the list of files, like: -F @ -f namefile, instead of: -f namefile -F @. -h, --no-dereference This option causes symlinks not to be followed (on systems that support symbolic links). This is the default if the environment variable POSIXLY_CORRECT is not defined. -i, --mime Causes the command to output mime type strings rather than the more traditional human readable ones. Thus it may say text/plain; charset=us-ascii rather than ASCII text. --mime-type, --mime-encoding Like -i, but print only the specified element(s). -k, --keep-going Don't stop at the first match, keep going. Subsequent matches will be have the string \012- prepended. (If you want a newline, see the -r option.) The magic pattern with the highest strength (see the -l option) comes first. -l, --list Shows a list of patterns and their strength sorted descending by magic(4) strength which is used for the matching (see also the -k option). -L, --dereference This option causes symlinks to be followed, as the like- named option in ls(1) (on systems that support symbolic links). This is the default if the environment variable POSIXLY_CORRECT is defined. -m, --magic-file magicfiles Specify an alternate list of files and directories containing magic. This can be a single item, or a colon- separated list. If a compiled magic file is found alongside a file or directory, it will be used instead. -N, --no-pad Don't pad filenames so that they align in the output. -n, --no-buffer Force stdout to be flushed after checking each file. This is only useful if checking a list of files. It is intended to be used by programs that want filetype output from a pipe. -p, --preserve-date On systems that support utime(3) or utimes(2), attempt to preserve the access time of files analyzed, to pretend that never read them. -P, --parameter name=value Set various parameter limits. Name Default Explanation bytes 1M max number of bytes to read from file elf_notes 256 max ELF notes processed elf_phnum 2K max ELF program sections processed elf_shnum 32K max ELF sections processed elf_shsize 128MB max ELF section size processed encoding 65K max number of bytes to determine encoding indir 50 recursion limit for indirect magic name 50 use count limit for name/use magic regex 8K length limit for regex searches -r, --raw Don't translate unprintable characters to \ooo. Normally translates unprintable characters to their octal representation. -s, --special-files Normally, only attempts to read and determine the type of argument files which stat(2) reports are ordinary files. This prevents problems, because reading special files may have peculiar consequences. Specifying the -s option causes to also read argument files which are block or character special files. This is useful for determining the filesystem types of the data in raw disk partitions, which are block special files. 
This option also causes to disregard the file size as reported by stat(2) since on some systems it reports a zero size for raw disk partitions. -S, --no-sandbox On systems where libseccomp (https://github.com/seccomp/libseccomp ) is available, the -S option disables sandboxing which is enabled by default. This option is needed for to execute external decompressing programs, i.e. when the -z option is specified and the built-in decompressors are not available. On systems where sandboxing is not available, this option has no effect. -v, --version Print the version of the program and exit. -z, --uncompress Try to look inside compressed files. -Z, --uncompress-noreport Try to look inside compressed files, but report information about the contents only not the compression. -0, --print0 Output a null character \0 after the end of the filename. Nice to cut(1) the output. This does not affect the separator, which is still printed. If this option is repeated more than once, then prints just the filename followed by a NUL followed by the description (or ERROR: text) followed by a second NUL for each entry. --help Print a help message and exit. ENVIRONMENT top The environment variable MAGIC can be used to set the default magic file name. If that variable is set, then will not attempt to open $HOME/.magic. adds .mgc to the value of this variable as appropriate. The environment variable POSIXLY_CORRECT controls (on systems that support symbolic links), whether will attempt to follow symlinks or not. If set, then follows symlink, otherwise it does not. This is also controlled by the -L and -h options. FILES top /usr/local/share/misc/magic.mgc Default compiled list of magic. /usr/local/share/misc/magic Directory containing default magic files. EXIT STATUS top will exit with 0 if the operation was successful or >0 if an error was encountered. The following errors cause diagnostic messages, but don't affect the program exit code (as POSIX requires), unless -E is specified: A file cannot be found There is no permission to read a file The file type cannot be determined EXAMPLES top $ file file.c file /dev/{wd0a,hda} file.c: C program text file: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), stripped /dev/wd0a: block special (0/0) /dev/hda: block special (3/0) $ file -s /dev/wd0{b,d} /dev/wd0b: data /dev/wd0d: x86 boot sector $ file -s /dev/hda{,1,2,3,4,5,6,7,8,9,10} /dev/hda: x86 boot sector /dev/hda1: Linux/i386 ext2 filesystem /dev/hda2: x86 boot sector /dev/hda3: x86 boot sector, extended partition table /dev/hda4: Linux/i386 ext2 filesystem /dev/hda5: Linux/i386 swap file /dev/hda6: Linux/i386 swap file /dev/hda7: Linux/i386 swap file /dev/hda8: Linux/i386 swap file /dev/hda9: empty /dev/hda10: empty $ file -i file.c file /dev/{wd0a,hda} file.c: text/x-c file: application/x-executable /dev/hda: application/x-not-regular-file /dev/wd0a: application/x-not-regular-file SEE ALSO top hexdump(1), od(1), strings(1), magic(4) STANDARDS CONFORMANCE top This program is believed to exceed the System V Interface Definition of FILE(CMD), as near as one can determine from the vague language contained therein. Its behavior is mostly compatible with the System V program of the same name. This version knows more magic, however, so it will produce different (albeit more accurate) output in many cases. 
The one significant difference between this version and System V is that this version treats any white space as a delimiter, so that spaces in pattern strings must be escaped. For example, >10 string language impress (imPRESS data) in an existing magic file would have to be changed to >10 string language\ impress (imPRESS data) In addition, in this version, if a pattern string contains a backslash, it must be escaped. For example 0 string \begindata Andrew Toolkit document in an existing magic file would have to be changed to 0 string \\begindata Andrew Toolkit document SunOS releases 3.2 and later from Sun Microsystems include a command derived from the System V one, but with some extensions. This version differs from Sun's only in minor ways. It includes the extension of the & operator, used as, for example, >16 long&0x7fffffff >0 not stripped SECURITY top On systems where libseccomp (https://github.com/seccomp/libseccomp ) is available, is enforces limiting system calls to only the ones necessary for the operation of the program. This enforcement does not provide any security benefit when is asked to decompress input files running external programs with the -z option. To enable execution of external decompressors, one needs to disable sandboxing using the -S option. MAGIC DIRECTORY top The magic file entries have been collected from various sources, mainly USENET, and contributed by various authors. Christos Zoulas (address below) will collect additional or corrected magic file entries. A consolidation of magic file entries will be distributed periodically. The order of entries in the magic file is significant. Depending on what system you are using, the order that they are put together may be incorrect. If your old command uses a magic file, keep the old magic file around for comparison purposes (rename it to /usr/local/share/misc/magic.orig). HISTORY top There has been a command in every UNIX since at least Research Version 4 (man page dated November, 1973). The System V version introduced one significant major change: the external list of magic types. This slowed the program down slightly but made it a lot more flexible. This program, based on the System V version, was written by Ian Darwin ian@darwinsys.com without looking at anybody else's source code. John Gilmore revised the code extensively, making it better than the first version. Geoff Collyer found several inadequacies and provided some magic file entries. Contributions of the & operator by Rob McMahon, cudcv@warwick.ac.uk, 1989. Guy Harris, guy@netapp.com, made many changes from 1993 to the present. Primary development and maintenance from 1990 to the present by Christos Zoulas christos@astron.com. Altered by Chris Lowth chris@lowth.com, 2000: handle the -i option to output mime type strings, using an alternative magic file and internal logic. Altered by Eric Fischer enf@pobox.com, July, 2000, to identify character codes and attempt to identify the languages of non- ASCII files. Altered by Reuben Thomas rrt@sc3d.org, 2007-2011, to improve MIME support, merge MIME and non-MIME magic, support directories as well as files of magic, apply many bug fixes, update and fix a lot of magic, improve the build system, improve the documentation, and rewrite the Python bindings in pure Python. The list of contributors to the magic directory (magic files) is too long to include here. You know who you are; thank you. Many contributors are listed in the source files. LEGAL NOTICE top Copyright (c) Ian F. Darwin, Toronto, Canada, 1986-1999. 
Covered by the standard Berkeley Software Distribution copyright; see the file COPYING in the source distribution. The files tar.h and is_tar.c were written by John Gilmore from his public-domain tar(1) program, and are not covered by the above license. BUGS top Please report bugs and send patches to the bug tracker at https://bugs.astron.com/ or the mailing list at file@astron.com (visit https://mailman.astron.com/mailman/listinfo/file first to subscribe). TODO top Fix output so that tests for MIME and APPLE flags are not needed all over the place, and actual output is only done in one place. This needs a design. Suggestion: push possible outputs on to a list, then pick the last-pushed (most specific, one hopes) value at the end, or use a default if the list is empty. This should not slow down evaluation. The handling of MAGIC_CONTINUE and printing \012- between entries is clumsy and complicated; refactor and centralize. Some of the encoding logic is hard-coded in encoding.c and can be moved to the magic files if we had a !:charset annotation. Continue to squash all magic bugs. See Debian BTS for a good source. Store arbitrarily long strings, for example for %s patterns, so that they can be printed out. Fixes Debian bug #271672. This can be done by allocating strings in a string pool, storing the string pool at the end of the magic file and converting all the string pointers to relative offsets from the string pool. Add syntax for relative offsets after current level (Debian bug #466037). Make file -ki work, i.e. give multiple MIME types. Add a zip library so we can peek inside Office2007 documents to print more details about their contents. Add an option to print URLs for the sources of the file descriptions. Combine script searches and add a way to map executable names to MIME types (e.g. have a magic value for !:mime which causes the resulting string to be looked up in a table). This would avoid adding the same magic repeatedly for each new hash-bang interpreter. When a file descriptor is available, we can skip and adjust the buffer instead of the hacky buffer management we do now. Fix name and use to check for consistency at compile time (duplicate name, use pointing to undefined name ). Make name / use more efficient by keeping a sorted list of names. Special-case ^ to flip endianness in the parser so that it does not have to be escaped, and document it. If the offsets specified internally in the file exceed the buffer size ( HOWMANY variable in file.h), then we don't seek to that offset, but we give up. It would be better if buffer managements was done when the file descriptor is available so we can seek around the file. One must be careful though because this has performance and thus security considerations, because one can slow down things by repeatedly seeking. There is support now for keeping separate buffers and having offsets from the end of the file, but the internal buffer management still needs an overhaul. AVAILABILITY top You can obtain the original author's latest version by anonymous FTP on ftp.astron.com in the directory /pub/file/file-X.YZ.tar.gz. COLOPHON top This page is part of the file (a file type guesser) project. Information about the project can be found at http://www.darwinsys.com/file/. If you have a bug report for this manual page, see http://bugs.gw.com/my_view_page.php. This page was obtained from the project's upstream Git read-only mirror of the CVS repository https://github.com/glensc/file on 2023-12-22. 
GNU May 21, 2023 FILE(1)
# file\n\n> Determine file type.\n> More information: <https://manned.org/file>.\n\n- Describe the type of the specified file (works even for files with no extension):\n\n`file {{path/to/file}}`\n\n- Look inside a zipped file and determine the file type(s) inside:\n\n`file -z {{foo.zip}}`\n\n- Allow file to work with special or device files:\n\n`file -s {{path/to/file}}`\n\n- Don't stop at the first file type match; keep going until the end of the file:\n\n`file -k {{path/to/file}}`\n\n- Display the MIME type and character encoding of a file:\n\n`file -i {{path/to/file}}`\n
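A scripting-oriented sketch (file names are placeholders): -b drops the leading file name and --mime-type prints only the type, which is convenient in pipelines; note that -F must come before -f so the separator applies to the listed files, as the OPTIONS section above explains.

$ file -b --mime-type report.pdf      # print only the MIME type, with no "filename:" prefix
$ file -F ' => ' *.log                # use a custom separator between file name and result
$ printf '%s\n' *.jpg > names.txt     # build a list of files, one per line
$ file -F @ -f names.txt              # set the separator first, then read the file list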
filefrag
filefrag(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training filefrag(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | AUTHOR | COLOPHON FILEFRAG(8) System Manager's Manual FILEFRAG(8) NAME top filefrag - report on file fragmentation SYNOPSIS top filefrag [ -bblocksize ] [ -BeEkPsvVxX ] [ files... ] DESCRIPTION top filefrag reports on how badly fragmented a particular file might be. It makes allowances for indirect blocks for ext2 and ext3 file systems, but can be used on files for any file system. The filefrag program initially attempts to get the extent information using FIEMAP ioctl which is more efficient and faster. If FIEMAP is not supported then filefrag will fall back to using FIBMAP. OPTIONS top -B Force the use of the older FIBMAP ioctl instead of the FIEMAP ioctl for testing purposes. -bblocksize Use blocksize in bytes, or with [KMG] suffix, up to 1GB for output instead of the file system blocksize. For compatibility with earlier versions of filefrag, if blocksize is unspecified it defaults to 1024 bytes. Since blocksize is an optional argument, it must be added without any space after -b. -e Print output in extent format, even for block-mapped files. -E Display the contents of ext4's extent status cache. This feature is not supported on all kernels, and is only supported on ext4 file systems. -k Use 1024-byte blocksize for output (identical to '-b1024'). -P Pre-load the ext4 extent status cache for the file. This is not supported on all kernels, and is only supported on ext4 file systems. -s Sync the file before requesting the mapping. -v Be verbose when checking for file fragmentation. -V Print version number of program and library. If given twice, also print the FIEMAP flags that are understood by the current version. -x Display mapping of extended attributes. -X Display extent block numbers in hexadecimal format. AUTHOR top filefrag was written by Theodore Ts'o <tytso@mit.edu>. COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 FILEFRAG(8)
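As a quick, hedged illustration of the option syntax described above (the file names are placeholders): -v gives a per-extent report, and because blocksize is an optional argument, the value must follow -b with no intervening space:

$ filefrag -v /var/log/syslog
$ filefrag -b4096 -v disk.img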
# filefrag\n\n> Report how badly fragmented a particular file might be.\n> More information: <https://manned.org/filefrag>.\n\n- Display a report for one or more files:\n\n`filefrag {{path/to/file1 path/to/file2 ...}}`\n\n- Display a report using a 1024 byte blocksize:\n\n`filefrag -b {{path/to/file}}`\n\n- Sync the file before requesting the mapping:\n\n`filefrag -s {{path/to/file1 path/to/file2 ...}}`\n\n- Display mapping of extended attributes:\n\n`filefrag -x {{path/to/file1 path/to/file2 ...}}`\n\n- Display a report with verbose information:\n\n`filefrag -v {{path/to/file1 path/to/file2 ...}}`\n
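One further sketch, assuming the default one-line summary output of the form "path: N extents found": rank a set of files by how many extents they occupy, most fragmented first. The awk field split is illustrative and will misbehave on file names that themselves contain ": ":

$ filefrag /var/log/* | awk -F': ' '{print $2, $1}' | sort -rn | head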
find
find(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training find(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXPRESSION | UNUSUAL FILENAMES | STANDARDS CONFORMANCE | ENVIRONMENT VARIABLES | EXAMPLES | EXIT STATUS | HISTORY | COMPATIBILITY | NON-BUGS | BUGS | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON FIND(1) General Commands Manual FIND(1) NAME top find - search for files in a directory hierarchy SYNOPSIS top find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression] DESCRIPTION top This manual page documents the GNU version of find. GNU find searches the directory tree rooted at each given starting-point by evaluating the given expression from left to right, according to the rules of precedence (see section OPERATORS), until the outcome is known (the left hand side is false for and operations, true for or), at which point find moves on to the next file name. If no starting-point is specified, `.' is assumed. If you are using find in an environment where security is important (for example if you are using it to search directories that are writable by other users), you should read the `Security Considerations' chapter of the findutils documentation, which is called Finding Files and comes with findutils. That document also includes a lot more detail and discussion than this manual page, so you may find it a more useful source of information. OPTIONS top The -H, -L and -P options control the treatment of symbolic links. Command-line arguments following these are taken to be names of files or directories to be examined, up to the first argument that begins with `-', or the argument `(' or `!'. That argument and any following arguments are taken to be the expression describing what is to be searched for. If no paths are given, the current directory is used. If no expression is given, the expression -print is used (but you should probably consider using -print0 instead, anyway). This manual page talks about `options' within the expression list. These options control the behaviour of find but are specified immediately after the last path name. The five `real' options -H, -L, -P, -D and -O must appear before the first path name, if at all. A double dash -- could theoretically be used to signal that any remaining arguments are not options, but this does not really work due to the way find determines the end of the following path arguments: it does that by reading until an expression argument comes (which also starts with a `-'). Now, if a path argument would start with a `-', then find would treat it as expression argument instead. Thus, to ensure that all start points are taken as such, and especially to prevent that wildcard patterns expanded by the calling shell are not mistakenly treated as expression arguments, it is generally safer to prefix wildcards or dubious path names with either `./' or to use absolute path names starting with '/'. Alternatively, it is generally safe though non-portable to use the GNU option -files0-from to pass arbitrary starting points to find. -P Never follow symbolic links. This is the default behaviour. When find examines or prints information about files, and the file is a symbolic link, the information used shall be taken from the properties of the symbolic link itself. -L Follow symbolic links. 
When find examines or prints information about files, the information used shall be taken from the properties of the file to which the link points, not from the link itself (unless it is a broken symbolic link or find is unable to examine the file to which the link points). Use of this option implies -noleaf. If you later use the -P option, -noleaf will still be in effect. If -L is in effect and find discovers a symbolic link to a subdirectory during its search, the subdirectory pointed to by the symbolic link will be searched. When the -L option is in effect, the -type predicate will always match against the type of the file that a symbolic link points to rather than the link itself (unless the symbolic link is broken). Actions that can cause symbolic links to become broken while find is executing (for example -delete) can give rise to confusing behaviour. Using -L causes the -lname and -ilname predicates always to return false. -H Do not follow symbolic links, except while processing the command line arguments. When find examines or prints information about files, the information used shall be taken from the properties of the symbolic link itself. The only exception to this behaviour is when a file specified on the command line is a symbolic link, and the link can be resolved. For that situation, the information used is taken from whatever the link points to (that is, the link is followed). The information about the link itself is used as a fallback if the file pointed to by the symbolic link cannot be examined. If -H is in effect and one of the paths specified on the command line is a symbolic link to a directory, the contents of that directory will be examined (though of course -maxdepth 0 would prevent this). If more than one of -H, -L and -P is specified, each overrides the others; the last one appearing on the command line takes effect. Since it is the default, the -P option should be considered to be in effect unless either -H or -L is specified. GNU find frequently stats files during the processing of the command line itself, before any searching has begun. These options also affect how those arguments are processed. Specifically, there are a number of tests that compare files listed on the command line against a file we are currently considering. In each case, the file specified on the command line will have been examined and some of its properties will have been saved. If the named file is in fact a symbolic link, and the -P option is in effect (or if neither -H nor -L were specified), the information used for the comparison will be taken from the properties of the symbolic link. Otherwise, it will be taken from the properties of the file the link points to. If find cannot follow the link (for example because it has insufficient privileges or the link points to a nonexistent file) the properties of the link itself will be used. When the -H or -L options are in effect, any symbolic links listed as the argument of -newer will be dereferenced, and the timestamp will be taken from the file to which the symbolic link points. The same consideration applies to -newerXY, -anewer and -cnewer. The -follow option has a similar effect to -L, though it takes effect at the point where it appears (that is, if -L is not used but -follow is, any symbolic links appearing after -follow on the command line will be dereferenced, and those before it will not). -D debugopts Print diagnostic information; this can be helpful to diagnose problems with why find is not doing what you want. 
The list of debug options should be comma separated. Compatibility of the debug options is not guaranteed between releases of findutils. For a complete list of valid debug options, see the output of find -D help. Valid debug options include exec Show diagnostic information relating to -exec, -execdir, -ok and -okdir opt Prints diagnostic information relating to the optimisation of the expression tree; see the -O option. rates Prints a summary indicating how often each predicate succeeded or failed. search Navigate the directory tree verbosely. stat Print messages as files are examined with the stat and lstat system calls. The find program tries to minimise such calls. tree Show the expression tree in its original and optimised form. all Enable all of the other debug options (but help). help Explain the debugging options. -Olevel Enables query optimisation. The find program reorders tests to speed up execution while preserving the overall effect; that is, predicates with side effects are not reordered relative to each other. The optimisations performed at each optimisation level are as follows. 0 Equivalent to optimisation level 1. 1 This is the default optimisation level and corresponds to the traditional behaviour. Expressions are reordered so that tests based only on the names of files (for example -name and -regex) are performed first. 2 Any -type or -xtype tests are performed after any tests based only on the names of files, but before any tests that require information from the inode. On many modern versions of Unix, file types are returned by readdir() and so these predicates are faster to evaluate than predicates which need to stat the file first. If you use the -fstype FOO predicate and specify a filesystem type FOO which is not known (that is, present in `/etc/mtab') at the time find starts, that predicate is equivalent to -false. 3 At this optimisation level, the full cost-based query optimiser is enabled. The order of tests is modified so that cheap (i.e. fast) tests are performed first and more expensive ones are performed later, if necessary. Within each cost band, predicates are evaluated earlier or later according to whether they are likely to succeed or not. For -o, predicates which are likely to succeed are evaluated earlier, and for -a, predicates which are likely to fail are evaluated earlier. The cost-based optimiser has a fixed idea of how likely any given test is to succeed. In some cases the probability takes account of the specific nature of the test (for example, -type f is assumed to be more likely to succeed than -type c). The cost-based optimiser is currently being evaluated. If it does not actually improve the performance of find, it will be removed again. Conversely, optimisations that prove to be reliable, robust and effective may be enabled at lower optimisation levels over time. However, the default behaviour (i.e. optimisation level 1) will not be changed in the 4.3.x release series. The findutils test suite runs all the tests on find at each optimisation level and ensures that the result is the same. EXPRESSION top The part of the command line after the list of starting points is the expression. This is a kind of query specification describing how we match files and what we do with the files that were matched. An expression is composed of a sequence of things: Tests Tests return a true or false value, usually on the basis of some property of a file we are considering. The -empty test for example is true only when the current file is empty. 
Actions Actions have side effects (such as printing something on the standard output) and return either true or false, usually based on whether or not they are successful. The -print action for example prints the name of the current file on the standard output. Global options Global options affect the operation of tests and actions specified on any part of the command line. Global options always return true. The -depth option for example makes find traverse the file system in a depth-first order. Positional options Positional options affect only tests or actions which follow them. Positional options always return true. The -regextype option for example is positional, specifying the regular expression dialect for regular expressions occurring later on the command line. Operators Operators join together the other items within the expression. They include for example -o (meaning logical OR) and -a (meaning logical AND). Where an operator is missing, -a is assumed. The -print action is performed on all files for which the whole expression is true, unless it contains an action other than -prune or -quit. Actions which inhibit the default -print are -delete, -exec, -execdir, -ok, -okdir, -fls, -fprint, -fprintf, -ls, -print and -printf. The -delete action also acts like an option (since it implies -depth). POSITIONAL OPTIONS Positional options always return true. They affect only tests occurring later on the command line. -daystart Measure times (for -amin, -atime, -cmin, -ctime, -mmin, and -mtime) from the beginning of today rather than from 24 hours ago. This option only affects tests which appear later on the command line. -follow Deprecated; use the -L option instead. Dereference symbolic links. Implies -noleaf. The -follow option affects only those tests which appear after it on the command line. Unless the -H or -L option has been specified, the position of the -follow option changes the behaviour of the -newer predicate; any files listed as the argument of -newer will be dereferenced if they are symbolic links. The same consideration applies to -newerXY, -anewer and -cnewer. Similarly, the -type predicate will always match against the type of the file that a symbolic link points to rather than the link itself. Using -follow causes the -lname and -ilname predicates always to return false. -regextype type Changes the regular expression syntax understood by -regex and -iregex tests which occur later on the command line. To see which regular expression types are known, use -regextype help. The Texinfo documentation (see SEE ALSO) explains the meaning of and differences between the various types of regular expression. -warn, -nowarn Turn warning messages on or off. These warnings apply only to the command line usage, not to any conditions that find might encounter when it searches directories. The default behaviour corresponds to -warn if standard input is a tty, and to -nowarn otherwise. If a warning message relating to command-line usage is produced, the exit status of find is not affected. If the POSIXLY_CORRECT environment variable is set, and -warn is also used, it is not specified which, if any, warnings will be active. GLOBAL OPTIONS Global options always return true. Global options take effect even for tests which occur earlier on the command line. To prevent confusion, global options should be specified on the command-line after the list of start points, just before the first test, positional option or action. 
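A minimal sketch of the placement advice above (paths and patterns are placeholders): in the first form the global -maxdepth option goes immediately after the start point, before any test; the second form, with -maxdepth after a test, still runs but triggers the warning described next:

$ find /var/log -maxdepth 1 -name '*.log'
$ find /var/log -name '*.log' -maxdepth 1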
If you specify a global option in some other place, find will issue a warning message explaining that this can be confusing. The global options occur after the list of start points, and so are not the same kind of option as -L, for example. -d A synonym for -depth, for compatibility with FreeBSD, NetBSD, MacOS X and OpenBSD. -depth Process each directory's contents before the directory itself. The -delete action also implies -depth. -files0-from file Read the starting points from file instead of getting them on the command line. In contrast to the known limitations of passing starting points via arguments on the command line, namely the limitation of the amount of file names, and the inherent ambiguity of file names clashing with option names, using this option allows to safely pass an arbitrary number of starting points to find. Using this option and passing starting points on the command line is mutually exclusive, and is therefore not allowed at the same time. The file argument is mandatory. One can use -files0-from - to read the list of starting points from the standard input stream, and e.g. from a pipe. In this case, the actions -ok and -okdir are not allowed, because they would obviously interfere with reading from standard input in order to get a user confirmation. The starting points in file have to be separated by ASCII NUL characters. Two consecutive NUL characters, i.e., a starting point with a Zero-length file name is not allowed and will lead to an error diagnostic followed by a non- Zero exit code later. In the case the given file is empty, find does not process any starting point and therefore will exit immediately after parsing the program arguments. This is unlike the standard invocation where find assumes the current directory as starting point if no path argument is passed. The processing of the starting points is otherwise as usual, e.g. find will recurse into subdirectories unless otherwise prevented. To process only the starting points, one can additionally pass -maxdepth 0. Further notes: if a file is listed more than once in the input file, it is unspecified whether it is visited more than once. If the file is mutated during the operation of find, the result is unspecified as well. Finally, the seek position within the named file at the time find exits, be it with -quit or in any other way, is also unspecified. By "unspecified" here is meant that it may or may not work or do any specific thing, and that the behavior may change from platform to platform, or from findutils release to release. -help, --help Print a summary of the command-line usage of find and exit. -ignore_readdir_race Normally, find will emit an error message when it fails to stat a file. If you give this option and a file is deleted between the time find reads the name of the file from the directory and the time it tries to stat the file, no error message will be issued. This also applies to files or directories whose names are given on the command line. This option takes effect at the time the command line is read, which means that you cannot search one part of the filesystem with this option on and part of it with this option off (if you need to do that, you will need to issue two find commands instead, one with the option and one without it). 
Furthermore, find with the -ignore_readdir_race option will ignore errors of the -delete action in the case the file has disappeared since the parent directory was read: it will not output an error diagnostic, and the return code of the -delete action will be true. -maxdepth levels Descend at most levels (a non-negative integer) levels of directories below the starting-points. Using -maxdepth 0 means only apply the tests and actions to the starting- points themselves. -mindepth levels Do not apply any tests or actions at levels less than levels (a non-negative integer). Using -mindepth 1 means process all files except the starting-points. -mount Don't descend directories on other filesystems. An alternate name for -xdev, for compatibility with some other versions of find. -noignore_readdir_race Turns off the effect of -ignore_readdir_race. -noleaf Do not optimize by assuming that directories contain 2 fewer subdirectories than their hard link count. This option is needed when searching filesystems that do not follow the Unix directory-link convention, such as CD-ROM or MS-DOS filesystems or AFS volume mount points. Each directory on a normal Unix filesystem has at least 2 hard links: its name and its `.' entry. Additionally, its subdirectories (if any) each have a `..' entry linked to that directory. When find is examining a directory, after it has statted 2 fewer subdirectories than the directory's link count, it knows that the rest of the entries in the directory are non-directories (`leaf' files in the directory tree). If only the files' names need to be examined, there is no need to stat them; this gives a significant increase in search speed. -version, --version Print the find version number and exit. -xdev Don't descend directories on other filesystems. TESTS Some tests, for example -newerXY and -samefile, allow comparison between the file currently being examined and some reference file specified on the command line. When these tests are used, the interpretation of the reference file is determined by the options -H, -L and -P and any previous -follow, but the reference file is only examined once, at the time the command line is parsed. If the reference file cannot be examined (for example, the stat(2) system call fails for it), an error message is issued, and find exits with a nonzero status. A numeric argument n can be specified to tests (like -amin, -mtime, -gid, -inum, -links, -size, -uid and -used) as +n for greater than n, -n for less than n, n for exactly n. Supported tests: -amin n File was last accessed less than, more than or exactly n minutes ago. -anewer reference Time of the last access of the current file is more recent than that of the last data modification of the reference file. If reference is a symbolic link and the -H option or the -L option is in effect, then the time of the last data modification of the file it points to is always used. -atime n File was last accessed less than, more than or exactly n*24 hours ago. When find figures out how many 24-hour periods ago the file was last accessed, any fractional part is ignored, so to match -atime +1, a file has to have been accessed at least two days ago. -cmin n File's status was last changed less than, more than or exactly n minutes ago. -cnewer reference Time of the last status change of the current file is more recent than that of the last data modification of the reference file. 
If reference is a symbolic link and the -H option or the -L option is in effect, then the time of the last data modification of the file it points to is always used. -ctime n File's status was last changed less than, more than or exactly n*24 hours ago. See the comments for -atime to understand how rounding affects the interpretation of file status change times. -empty File is empty and is either a regular file or a directory. -executable Matches files which are executable and directories which are searchable (in a file name resolution sense) by the current user. This takes into account access control lists and other permissions artefacts which the -perm test ignores. This test makes use of the access(2) system call, and so can be fooled by NFS servers which do UID mapping (or root-squashing), since many systems implement access(2) in the client's kernel and so cannot make use of the UID mapping information held on the server. Because this test is based only on the result of the access(2) system call, there is no guarantee that a file for which this test succeeds can actually be executed. -false Always false. -fstype type File is on a filesystem of type type. The valid filesystem types vary among different versions of Unix; an incomplete list of filesystem types that are accepted on some version of Unix or another is: ufs, 4.2, 4.3, nfs, tmp, mfs, S51K, S52K. You can use -printf with the %F directive to see the types of your filesystems. -gid n File's numeric group ID is less than, more than or exactly n. -group gname File belongs to group gname (numeric group ID allowed). -ilname pattern Like -lname, but the match is case insensitive. If the -L option or the -follow option is in effect, this test returns false unless the symbolic link is broken. -iname pattern Like -name, but the match is case insensitive. For example, the patterns `fo*' and `F??' match the file names `Foo', `FOO', `foo', `fOo', etc. The pattern `*foo*` will also match a file called '.foobar'. -inum n File has inode number smaller than, greater than or exactly n. It is normally easier to use the -samefile test instead. -ipath pattern Like -path. but the match is case insensitive. -iregex pattern Like -regex, but the match is case insensitive. -iwholename pattern See -ipath. This alternative is less portable than -ipath. -links n File has less than, more than or exactly n hard links. -lname pattern File is a symbolic link whose contents match shell pattern pattern. The metacharacters do not treat `/' or `.' specially. If the -L option or the -follow option is in effect, this test returns false unless the symbolic link is broken. -mmin n File's data was last modified less than, more than or exactly n minutes ago. -mtime n File's data was last modified less than, more than or exactly n*24 hours ago. See the comments for -atime to understand how rounding affects the interpretation of file modification times. -name pattern Base of file name (the path with the leading directories removed) matches shell pattern pattern. Because the leading directories of the file names are removed, the pattern should not include a slash, because `-name a/b' will never match anything (and you probably want to use -path instead). An exception to this is when using only a slash as pattern (`-name /'), because that is a valid string for matching the root directory "/" (because the base name of "/" is "/"). 
A warning is issued if you try to pass a pattern containing a slash (but not consisting solely of one slash), unless the environment variable POSIXLY_CORRECT is set or the option -nowarn is used. To ignore a directory and the files under it, use -prune rather than checking every file in the tree; see an example in the description of that action. Braces are not recognised as being special, despite the fact that some shells including Bash imbue braces with a special meaning in shell patterns. The filename matching is performed with the use of the fnmatch(3) library function. Don't forget to enclose the pattern in quotes in order to protect it from expansion by the shell. -newer reference Time of the last data modification of the current file is more recent than that of the last data modification of the reference file. If reference is a symbolic link and the -H option or the -L option is in effect, then the time of the last data modification of the file it points to is always used. -newerXY reference Succeeds if timestamp X of the file being considered is newer than timestamp Y of the file reference. The letters X and Y can be any of the following letters: a The access time of the file reference B The birth time of the file reference c The inode status change time of reference m The modification time of the file reference t reference is interpreted directly as a time Some combinations are invalid; for example, it is invalid for X to be t. Some combinations are not implemented on all systems; for example B is not supported on all systems. If an invalid or unsupported combination of XY is specified, a fatal error results. Time specifications are interpreted as for the argument to the -d option of GNU date. If you try to use the birth time of a reference file, and the birth time cannot be determined, a fatal error message results. If you specify a test which refers to the birth time of files being examined, this test will fail for any files where the birth time is unknown. -nogroup No group corresponds to file's numeric group ID. -nouser No user corresponds to file's numeric user ID. -path pattern File name matches shell pattern pattern. The metacharacters do not treat `/' or `.' specially; so, for example, find . -path "./sr*sc" will print an entry for a directory called ./src/misc (if one exists). To ignore a whole directory tree, use -prune rather than checking every file in the tree. Note that the pattern match test applies to the whole file name, starting from one of the start points named on the command line. It would only make sense to use an absolute path name here if the relevant start point is also an absolute path. This means that this command will never match anything: find bar -path /foo/bar/myfile -print Find compares the -path argument with the concatenation of a directory name and the base name of the file it's examining. Since the concatenation will never end with a slash, -path arguments ending in a slash will match nothing (except perhaps a start point specified on the command line). The predicate -path is also supported by HP-UX find and is part of the POSIX 2008 standard. -perm mode File's permission bits are exactly mode (octal or symbolic). Since an exact match is required, if you want to use this form for symbolic modes, you may have to specify a rather complex mode string. For example `-perm g=w' will only match files which have mode 0020 (that is, ones for which group write permission is the only permission set). 
It is more likely that you will want to use the `/' or `-' forms, for example `-perm -g=w', which matches any file with group write permission. See the EXAMPLES section for some illustrative examples. -perm -mode All of the permission bits mode are set for the file. Symbolic modes are accepted in this form, and this is usually the way in which you would want to use them. You must specify `u', `g' or `o' if you use a symbolic mode. See the EXAMPLES section for some illustrative examples. -perm /mode Any of the permission bits mode are set for the file. Symbolic modes are accepted in this form. You must specify `u', `g' or `o' if you use a symbolic mode. See the EXAMPLES section for some illustrative examples. If no permission bits in mode are set, this test matches any file (the idea here is to be consistent with the behaviour of -perm -000). -perm +mode This is no longer supported (and has been deprecated since 2005). Use -perm /mode instead. -readable Matches files which are readable by the current user. This takes into account access control lists and other permissions artefacts which the -perm test ignores. This test makes use of the access(2) system call, and so can be fooled by NFS servers which do UID mapping (or root- squashing), since many systems implement access(2) in the client's kernel and so cannot make use of the UID mapping information held on the server. -regex pattern File name matches regular expression pattern. This is a match on the whole path, not a search. For example, to match a file named ./fubar3, you can use the regular expression `.*bar.' or `.*b.*3', but not `f.*r3'. The regular expressions understood by find are by default Emacs Regular Expressions (except that `.' matches newline), but this can be changed with the -regextype option. -samefile name File refers to the same inode as name. When -L is in effect, this can include symbolic links. -size n[cwbkMG] File uses less than, more than or exactly n units of space, rounding up. The following suffixes can be used: `b' for 512-byte blocks (this is the default if no suffix is used) `c' for bytes `w' for two-byte words `k' for kibibytes (KiB, units of 1024 bytes) `M' for mebibytes (MiB, units of 1024 * 1024 = 1048576 bytes) `G' for gibibytes (GiB, units of 1024 * 1024 * 1024 = 1073741824 bytes) The size is simply the st_size member of the struct stat populated by the lstat (or stat) system call, rounded up as shown above. In other words, it's consistent with the result you get for ls -l. Bear in mind that the `%k' and `%b' format specifiers of -printf handle sparse files differently. The `b' suffix always denotes 512-byte blocks and never 1024-byte blocks, which is different to the behaviour of -ls. The + and - prefixes signify greater than and less than, as usual; i.e., an exact size of n units does not match. Bear in mind that the size is rounded up to the next unit. Therefore -size -1M is not equivalent to -size -1048576c. The former only matches empty files, the latter matches files from 0 to 1,048,575 bytes. -true Always true. -type c File is of type c: b block (buffered) special c character (unbuffered) special d directory p named pipe (FIFO) f regular file l symbolic link; this is never true if the -L option or the -follow option is in effect, unless the symbolic link is broken. If you want to search for symbolic links when -L is in effect, use -xtype. s socket D door (Solaris) To search for more than one type at once, you can supply the combined list of type letters separated by a comma `,' (GNU extension). 
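To illustrate the comma-separated type list just mentioned (a GNU extension; the paths are placeholders):

$ find /dev -type b,c
$ find . -type f,l

The first prints block and character special files under /dev; the second prints regular files and symbolic links under the current directory.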
-uid n File's numeric user ID is less than, more than or exactly n. -used n File was last accessed less than, more than or exactly n days after its status was last changed. -user uname File is owned by user uname (numeric user ID allowed). -wholename pattern See -path. This alternative is less portable than -path. -writable Matches files which are writable by the current user. This takes into account access control lists and other permissions artefacts which the -perm test ignores. This test makes use of the access(2) system call, and so can be fooled by NFS servers which do UID mapping (or root- squashing), since many systems implement access(2) in the client's kernel and so cannot make use of the UID mapping information held on the server. -xtype c The same as -type unless the file is a symbolic link. For symbolic links: if the -H or -P option was specified, true if the file is a link to a file of type c; if the -L option has been given, true if c is `l'. In other words, for symbolic links, -xtype checks the type of the file that -type does not check. -context pattern (SELinux only) Security context of the file matches glob pattern. ACTIONS -delete Delete files or directories; true if removal succeeded. If the removal failed, an error message is issued and find's exit status will be nonzero (when it eventually exits). Warning: Don't forget that find evaluates the command line as an expression, so putting -delete first will make find try to delete everything below the starting points you specified. The use of the -delete action on the command line automatically turns on the -depth option. As in turn -depth makes -prune ineffective, the -delete action cannot usefully be combined with -prune. Often, the user might want to test a find command line with -print prior to adding -delete for the actual removal run. To avoid surprising results, it is usually best to remember to use -depth explicitly during those earlier test runs. The -delete action will fail to remove a directory unless it is empty. Together with the -ignore_readdir_race option, find will ignore errors of the -delete action in the case the file has disappeared since the parent directory was read: it will not output an error diagnostic, not change the exit code to nonzero, and the return code of the -delete action will be true. -exec command ; Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;' is encountered. The string `{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find. Both of these constructions might need to be escaped (with a `\') or quoted to protect them from expansion by the shell. See the EXAMPLES section for examples of the use of the -exec option. The specified command is run once for each matched file. The command is executed in the starting directory. There are unavoidable security problems surrounding use of the -exec action; you should use the -execdir option instead. -exec command {} + This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. 
Only one instance of `{}' is allowed within the command, and it must appear at the end, immediately before the `+'; it needs to be escaped (with a `\') or quoted to protect it from interpretation by the shell. The command is executed in the starting directory. If any invocation with the `+' form returns a non-zero value as exit status, then find returns a non-zero exit status. If find encounters an error, this can sometimes cause an immediate exit, so some pending commands may not be run at all. For this reason -exec my- command ... {} + -quit may not result in my-command actually being run. This variant of -exec always returns true. -execdir command ; -execdir command {} + Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find. As with -exec, the {} should be quoted if find is being invoked from a shell. This a much more secure method for invoking commands, as it avoids race conditions during resolution of the paths to the matched files. As with the -exec action, the `+' form of -execdir will build a command line to process more than one matched file, but any given invocation of command will only list files that exist in the same subdirectory. If you use this option, you must ensure that your PATH environment variable does not reference `.'; otherwise, an attacker can run any commands they like by leaving an appropriately-named file in a directory in which you will run -execdir. The same applies to having entries in PATH which are empty or which are not absolute directory names. If any invocation with the `+' form returns a non-zero value as exit status, then find returns a non-zero exit status. If find encounters an error, this can sometimes cause an immediate exit, so some pending commands may not be run at all. The result of the action depends on whether the + or the ; variant is being used; -execdir command {} + always returns true, while -execdir command {} ; returns true only if command returns 0. -fls file True; like -ls but write to file like -fprint. The output file is always created, even if the predicate is never matched. See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. -fprint file True; print the full file name into file file. If file does not exist when find is run, it is created; if it does exist, it is truncated. The file names /dev/stdout and /dev/stderr are handled specially; they refer to the standard output and standard error output, respectively. The output file is always created, even if the predicate is never matched. See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. -fprint0 file True; like -print0 but write to file like -fprint. The output file is always created, even if the predicate is never matched. See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. -fprintf file format True; like -printf but write to file like -fprint. The output file is always created, even if the predicate is never matched. See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. -ls True; list current file in ls -dils format on standard output. The block counts are of 1 KB blocks, unless the environment variable POSIXLY_CORRECT is set, in which case 512-byte blocks are used. See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. 
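A hedged sketch of the two -execdir forms described above (patterns and commands are placeholders): the `+' form batches file names per directory, while the `;' form runs the command once per matched file:

$ find . -name '*.h' -execdir grep -l PATH_MAX '{}' +
$ find . -name '*.sh' -execdir chmod a+x '{}' \;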
-ok command ; Like -exec but ask the user first. If the user agrees, run the command. Otherwise just return false. If the command is run, its standard input is redirected from /dev/null. This action may not be specified together with the -files0-from option. The response to the prompt is matched against a pair of regular expressions to determine if it is an affirmative or negative response. This regular expression is obtained from the system if the POSIXLY_CORRECT environment variable is set, or otherwise from find's message translations. If the system has no suitable definition, find's own definition will be used. In either case, the interpretation of the regular expression itself will be affected by the environment variables LC_CTYPE (character classes) and LC_COLLATE (character ranges and equivalence classes). -okdir command ; Like -execdir but ask the user first in the same way as for -ok. If the user does not agree, just return false. If the command is run, its standard input is redirected from /dev/null. This action may not be specified together with the -files0-from option. -print True; print the full file name on the standard output, followed by a newline. If you are piping the output of find into another program and there is the faintest possibility that the files which you are searching for might contain a newline, then you should seriously consider using the -print0 option instead of -print. See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. -print0 True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses). This allows file names that contain newlines or other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to the -0 option of xargs. -printf format True; print format on the standard output, interpreting `\' escapes and `%' directives. Field widths and precisions can be specified as with the printf(3) C function. Please note that many of the fields are printed as %s rather than %d, and this may mean that flags don't work as you might expect. This also means that the `-' flag does work (it forces fields to be left-aligned). Unlike -print, -printf does not add a newline at the end of the string. The escapes and directives are: \a Alarm bell. \b Backspace. \c Stop printing from this format immediately and flush the output. \f Form feed. \n Newline. \r Carriage return. \t Horizontal tab. \v Vertical tab. \0 ASCII NUL. \\ A literal backslash (`\'). \NNN The character whose ASCII code is NNN (octal). A `\' character followed by any other character is treated as an ordinary character, so they both are printed. %% A literal percent sign. %a File's last access time in the format returned by the C ctime(3) function. %Ak File's last access time in the format specified by k, which is either `@' or a directive for the C strftime(3) function. The following shows an incomplete list of possible values for k. Please refer to the documentation of strftime(3) for the full list. Some of the conversion specification characters might not be available on all systems, due to differences in the implementation of the strftime(3) library function. @ seconds since Jan. 1, 1970, 00:00 GMT, with fractional part. Time fields: H hour (00..23) I hour (01..12) k hour ( 0..23) l hour ( 1..12) M minute (00..59) p locale's AM or PM r time, 12-hour (hh:mm:ss [AP]M) S Second (00.00 .. 61.00). There is a fractional part. 
T time, 24-hour (hh:mm:ss.xxxxxxxxxx) + Date and time, separated by `+', for example `2004-04-28+22:22:05.0'. This is a GNU extension. The time is given in the current timezone (which may be affected by setting the TZ environment variable). The seconds field includes a fractional part. X locale's time representation (H:M:S). The seconds field includes a fractional part. Z time zone (e.g., EDT), or nothing if no time zone is determinable Date fields: a locale's abbreviated weekday name (Sun..Sat) A locale's full weekday name, variable length (Sunday..Saturday) b locale's abbreviated month name (Jan..Dec) B locale's full month name, variable length (January..December) c locale's date and time (Sat Nov 04 12:02:33 EST 1989). The format is the same as for ctime(3) and so to preserve compatibility with that format, there is no fractional part in the seconds field. d day of month (01..31) D date (mm/dd/yy) F date (yyyy-mm-dd) h same as b j day of year (001..366) m month (01..12) U week number of year with Sunday as first day of week (00..53) w day of week (0..6) W week number of year with Monday as first day of week (00..53) x locale's date representation (mm/dd/yy) y last two digits of year (00..99) Y year (1970...) %b The amount of disk space used for this file in 512-byte blocks. Since disk space is allocated in multiples of the filesystem block size this is usually greater than %s/512, but it can also be smaller if the file is a sparse file. %Bk File's birth time, i.e., its creation time, in the format specified by k, which is the same as for %A. This directive produces an empty string if the underlying operating system or filesystem does not support birth times. %c File's last status change time in the format returned by the C ctime(3) function. %Ck File's last status change time in the format specified by k, which is the same as for %A. %d File's depth in the directory tree; 0 means the file is a starting-point. %D The device number on which the file exists (the st_dev field of struct stat), in decimal. %f Print the basename; the file's name with any leading directories removed (only the last element). For /, the result is `/'. See the EXAMPLES section for an example. %F Type of the filesystem the file is on; this value can be used for -fstype. %g File's group name, or numeric group ID if the group has no name. %G File's numeric group ID. %h Dirname; the Leading directories of the file's name (all but the last element). If the file name contains no slashes (since it is in the current directory) the %h specifier expands to `.'. For files which are themselves directories and contain a slash (including /), %h expands to the empty string. See the EXAMPLES section for an example. %H Starting-point under which file was found. %i File's inode number (in decimal). %k The amount of disk space used for this file in 1 KB blocks. Since disk space is allocated in multiples of the filesystem block size this is usually greater than %s/1024, but it can also be smaller if the file is a sparse file. %l Object of symbolic link (empty string if file is not a symbolic link). %m File's permission bits (in octal). This option uses the `traditional' numbers which most Unix implementations use, but if your particular implementation uses an unusual ordering of octal permissions bits, you will see a difference between the actual value of the file's mode and the output of %m. Normally you will want to have a leading zero on this number, and to do this, you should use the # flag (as in, for example, `%#m'). 
%M File's permissions (in symbolic form, as for ls). This directive is supported in findutils 4.2.5 and later. %n Number of hard links to file. %p File's name. %P File's name with the name of the starting-point under which it was found removed. %s File's size in bytes. %S File's sparseness. This is calculated as (BLOCKSIZE*st_blocks / st_size). The exact value you will get for an ordinary file of a certain length is system-dependent. However, normally sparse files will have values less than 1.0, and files which use indirect blocks may have a value which is greater than 1.0. In general the number of blocks used by a file is file system dependent. The value used for BLOCKSIZE is system-dependent, but is usually 512 bytes. If the file size is zero, the value printed is undefined. On systems which lack support for st_blocks, a file's sparseness is assumed to be 1.0. %t File's last modification time in the format returned by the C ctime(3) function. %Tk File's last modification time in the format specified by k, which is the same as for %A. %u File's user name, or numeric user ID if the user has no name. %U File's numeric user ID. %y File's type (like in ls -l), U=unknown type (shouldn't happen) %Y File's type (like %y), plus follow symbolic links: `L'=loop, `N'=nonexistent, `?' for any other error when determining the type of the target of a symbolic link. %Z (SELinux only) file's security context. %{ %[ %( Reserved for future use. A `%' character followed by any other character is discarded, but the other character is printed (don't rely on this, as further format characters may be introduced). A `%' at the end of the format argument causes undefined behaviour since there is no following character. In some locales, it may hide your door keys, while in others it may remove the final page from the novel you are reading. The %m and %d directives support the #, 0 and + flags, but the other directives do not, even if they print numbers. Numeric directives that do not support these flags include G, U, b, D, k and n. The `-' format flag is supported and changes the alignment of a field from right-justified (which is the default) to left-justified. See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. -prune True; if the file is a directory, do not descend into it. If -depth is given, then -prune has no effect. Because -delete implies -depth, you cannot usefully use -prune and -delete together. For example, to skip the directory src/emacs and all files and directories under it, and print the names of the other files found, do something like this: find . -path ./src/emacs -prune -o -print -quit Exit immediately (with return value zero if no errors have occurred). This is different to -prune because -prune only applies to the contents of pruned directories, while -quit simply makes find stop immediately. No child processes will be left running. Any command lines which have been built by -exec ... + or -execdir ... + are invoked before the program is exited. After -quit is executed, no more files specified on the command line will be processed. For example, `find /tmp/foo /tmp/bar -print -quit` will print only `/tmp/foo`. One common use of -quit is to stop searching the file system once we have found what we want. For example, if we want to find just a single file we can do this: find / -name needle -print -quit OPERATORS Listed in order of decreasing precedence: ( expr ) Force precedence. 
Since parentheses are special to the shell, you will normally need to quote them. Many of the examples in this manual page use backslashes for this purpose: `\(...\)' instead of `(...)'. ! expr True if expr is false. This character will also usually need protection from interpretation by the shell. -not expr Same as ! expr, but not POSIX compliant. expr1 expr2 Two expressions in a row are taken to be joined with an implied -a; expr2 is not evaluated if expr1 is false. expr1 -a expr2 Same as expr1 expr2. expr1 -and expr2 Same as expr1 expr2, but not POSIX compliant. expr1 -o expr2 Or; expr2 is not evaluated if expr1 is true. expr1 -or expr2 Same as expr1 -o expr2, but not POSIX compliant. expr1 , expr2 List; both expr1 and expr2 are always evaluated. The value of expr1 is discarded; the value of the list is the value of expr2. The comma operator can be useful for searching for several different types of thing, but traversing the filesystem hierarchy only once. The -fprintf action can be used to list the various matched items into several different output files. Please note that -a when specified implicitly (for example by two tests appearing without an explicit operator between them) or explicitly has higher precedence than -o. This means that find . -name afile -o -name bfile -print will never print afile. UNUSUAL FILENAMES top Many of the actions of find result in the printing of data which is under the control of other users. This includes file names, sizes, modification times and so forth. File names are a potential problem since they can contain any character except `\0' and `/'. Unusual characters in file names can do unexpected and often undesirable things to your terminal (for example, changing the settings of your function keys on some terminals). Unusual characters are handled differently by various actions, as described below. -print0, -fprint0 Always print the exact filename, unchanged, even if the output is going to a terminal. -ls, -fls Unusual characters are always escaped. White space, backslash, and double quote characters are printed using C-style escaping (for example `\f', `\"'). Other unusual characters are printed using an octal escape. Other printable characters (for -ls and -fls these are the characters between octal 041 and 0176) are printed as-is. -printf, -fprintf If the output is not going to a terminal, it is printed as-is. Otherwise, the result depends on which directive is in use. The directives %D, %F, %g, %G, %H, %Y, and %y expand to values which are not under control of files' owners, and so are printed as-is. The directives %a, %b, %c, %d, %i, %k, %m, %M, %n, %s, %t, %u and %U have values which are under the control of files' owners but which cannot be used to send arbitrary data to the terminal, and so these are printed as-is. The directives %f, %h, %l, %p and %P are quoted. This quoting is performed in the same way as for GNU ls. This is not the same quoting mechanism as the one used for -ls and -fls. If you are able to decide what format to use for the output of find then it is normally better to use `\0' as a terminator than to use newline, as file names can contain white space and newline characters. The setting of the LC_CTYPE environment variable is used to determine which characters need to be quoted. -print, -fprint Quoting is handled in the same way as for -printf and -fprintf. If you are using find in a script or in a situation where the matched files might have arbitrary names, you should consider using -print0 instead of -print. 
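Besides piping into xargs -0 (see the EXAMPLES section), a common shell idiom for consuming -print0 output is a NUL-delimited read loop; this sketch assumes a shell, such as bash, whose read builtin accepts -d '':

$ find . -type f -print0 | while IFS= read -r -d '' name; do printf '%s\n' "$name"; done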
The -ok and -okdir actions print the current filename as-is. This may change in a future release. STANDARDS CONFORMANCE top For closest compliance to the POSIX standard, you should set the POSIXLY_CORRECT environment variable. The following options are specified in the POSIX standard (IEEE Std 1003.1-2008, 2016 Edition): -H This option is supported. -L This option is supported. -name This option is supported, but POSIX conformance depends on the POSIX conformance of the system's fnmatch(3) library function. As of findutils-4.2.2, shell metacharacters (`*', `?' or `[]' for example) match a leading `.', because IEEE PASC interpretation 126 requires this. This is a change from previous versions of findutils. -type Supported. POSIX specifies `b', `c', `d', `l', `p', `f' and `s'. GNU find also supports `D', representing a Door, where the OS provides these. Furthermore, GNU find allows multiple types to be specified at once in a comma- separated list. -ok Supported. Interpretation of the response is according to the `yes' and `no' patterns selected by setting the LC_MESSAGES environment variable. When the POSIXLY_CORRECT environment variable is set, these patterns are taken system's definition of a positive (yes) or negative (no) response. See the system's documentation for nl_langinfo(3), in particular YESEXPR and NOEXPR. When POSIXLY_CORRECT is not set, the patterns are instead taken from find's own message catalogue. -newer Supported. If the file specified is a symbolic link, it is always dereferenced. This is a change from previous behaviour, which used to take the relevant time from the symbolic link; see the HISTORY section below. -perm Supported. If the POSIXLY_CORRECT environment variable is not set, some mode arguments (for example +a+x) which are not valid in POSIX are supported for backward- compatibility. Other primaries The primaries -atime, -ctime, -depth, -exec, -group, -links, -mtime, -nogroup, -nouser, -ok, -path, -print, -prune, -size, -user and -xdev are all supported. The POSIX standard specifies parentheses `(', `)', negation `!' and the logical AND/OR operators -a and -o. All other options, predicates, expressions and so forth are extensions beyond the POSIX standard. Many of these extensions are not unique to GNU find, however. The POSIX standard requires that find detects loops: The find utility shall detect infinite loops; that is, entering a previously visited directory that is an ancestor of the last file encountered. When it detects an infinite loop, find shall write a diagnostic message to standard error and shall either recover its position in the hierarchy or terminate. GNU find complies with these requirements. The link count of directories which contain entries which are hard links to an ancestor will often be lower than they otherwise should be. This can mean that GNU find will sometimes optimise away the visiting of a subdirectory which is actually a link to an ancestor. Since find does not actually enter such a subdirectory, it is allowed to avoid emitting a diagnostic message. Although this behaviour may be somewhat confusing, it is unlikely that anybody actually depends on this behaviour. If the leaf optimisation has been turned off with -noleaf, the directory entry will always be examined and the diagnostic message will be issued where it is appropriate. Symbolic links cannot be used to create filesystem cycles as such, but if the -L option or the -follow option is in use, a diagnostic message is issued when find encounters a loop of symbolic links. 
As with loops containing hard links, the leaf optimisation will often mean that find knows that it doesn't need to call stat() or chdir() on the symbolic link, so this diagnostic is frequently not necessary. The -d option is supported for compatibility with various BSD systems, but you should use the POSIX-compliant option -depth instead. The POSIXLY_CORRECT environment variable does not affect the behaviour of the -regex or -iregex tests because those tests aren't specified in the POSIX standard. ENVIRONMENT VARIABLES top LANG Provides a default value for the internationalization variables that are unset or null. LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_COLLATE The POSIX standard specifies that this variable affects the pattern matching to be used for the -name option. GNU find uses the fnmatch(3) library function, and so support for LC_COLLATE depends on the system library. This variable also affects the interpretation of the response to -ok; while the LC_MESSAGES variable selects the actual pattern used to interpret the response to -ok, the interpretation of any bracket expressions in the pattern will be affected by LC_COLLATE. LC_CTYPE This variable affects the treatment of character classes used in regular expressions and also with the -name test, if the system's fnmatch(3) library function supports this. This variable also affects the interpretation of any character classes in the regular expressions used to interpret the response to the prompt issued by -ok. The LC_CTYPE environment variable will also affect which characters are considered to be unprintable when filenames are printed; see the section UNUSUAL FILENAMES. LC_MESSAGES Determines the locale to be used for internationalised messages. If the POSIXLY_CORRECT environment variable is set, this also determines the interpretation of the response to the prompt made by the -ok action. NLSPATH Determines the location of the internationalisation message catalogues. PATH Affects the directories which are searched to find the executables invoked by -exec, -execdir, -ok and -okdir. POSIXLY_CORRECT Determines the block size used by -ls and -fls. If POSIXLY_CORRECT is set, blocks are units of 512 bytes. Otherwise they are units of 1024 bytes. Setting this variable also turns off warning messages (that is, implies -nowarn) by default, because POSIX requires that apart from the output for -ok, all messages printed on stderr are diagnostics and must result in a non-zero exit status. When POSIXLY_CORRECT is not set, -perm +zzz is treated just like -perm /zzz if +zzz is not a valid symbolic mode. When POSIXLY_CORRECT is set, such constructs are treated as an error. When POSIXLY_CORRECT is set, the response to the prompt made by the -ok action is interpreted according to the system's message catalogue, as opposed to according to find's own message translations. TZ Affects the time zone used for some of the time-related format directives of -printf and -fprintf. EXAMPLES top Simple `find|xargs` approach Find files named core in or below the directory /tmp and delete them. $ find /tmp -name core -type f -print | xargs /bin/rm -f Note that this will work incorrectly if there are any filenames containing newlines, single or double quotes, or spaces. 
Safer `find -print0 | xargs -0` approach Find files named core in or below the directory /tmp and delete them, processing filenames in such a way that file or directory names containing single or double quotes, spaces or newlines are correctly handled. $ find /tmp -name core -type f -print0 | xargs -0 /bin/rm -f The -name test comes before the -type test in order to avoid having to call stat(2) on every file. Note that there is still a race between the time find traverses the hierarchy printing the matching filenames, and the time the process executed by xargs works with that file. Processing arbitrary starting points Given that another program proggy pre-filters and creates a huge NUL-separated list of files, process those as starting points, and find all regular, empty files among them: $ proggy | find -files0-from - -maxdepth 0 -type f -empty The use of `-files0-from -` means to read the names of the starting points from standard input, i.e., from the pipe; and -maxdepth 0 ensures that only explicitly those entries are examined without recursing into directories (in the case one of the starting points is one). Executing a command for each file Run file on every file in or below the current directory. $ find . -type f -exec file '{}' \; Notice that the braces are enclosed in single quote marks to protect them from interpretation as shell script punctuation. The semicolon is similarly protected by the use of a backslash, though single quotes could have been used in that case also. In many cases, one might prefer the `-exec ... +` or better the `-execdir ... +` syntax for performance and security reasons. Traversing the filesystem just once - for 2 different actions Traverse the filesystem just once, listing set-user-ID files and directories into /root/suid.txt and large files into /root/big.txt. $ find / \ \( -perm -4000 -fprintf /root/suid.txt '%#m %u %p\n' \) , \ \( -size +100M -fprintf /root/big.txt '%-10s %p\n' \) This example uses the line-continuation character '\' on the first two lines to instruct the shell to continue reading the command on the next line. Searching files by age Search for files in your home directory which have been modified in the last twenty-four hours. $ find $HOME -mtime 0 This command works this way because the time since each file was last modified is divided by 24 hours and any remainder is discarded. That means that to match -mtime 0, a file will have to have a modification in the past which is less than 24 hours ago. Searching files by permissions Search for files which are executable but not readable. $ find /sbin /usr/sbin -executable \! -readable -print Search for files which have read and write permission for their owner, and group, but which other users can read but not write to. $ find . -perm 664 Files which meet these criteria but have other permissions bits set (for example if someone can execute the file) will not be matched. Search for files which have read and write permission for their owner and group, and which other users can read, without regard to the presence of any extra permission bits (for example the executable bit). $ find . -perm -664 This will match a file which has mode 0777, for example. Search for files which are writable by somebody (their owner, or their group, or anybody else). $ find . -perm /222 Search for files which are writable by either their owner or their group. $ find . -perm /220 $ find . -perm /u+w,g+w $ find . 
-perm /u=w,g=w All three of these commands do the same thing, but the first one uses the octal representation of the file mode, and the other two use the symbolic form. The files don't have to be writable by both the owner and group to be matched; either will do. Search for files which are writable by both their owner and their group. $ find . -perm -220 $ find . -perm -g+w,u+w Both these commands do the same thing. A more elaborate search on permissions. $ find . -perm -444 -perm /222 \! -perm /111 $ find . -perm -a+r -perm /a+w \! -perm /a+x These two commands both search for files that are readable for everybody (-perm -444 or -perm -a+r), have at least one write bit set (-perm /222 or -perm /a+w) but are not executable for anybody (! -perm /111 or ! -perm /a+x respectively). Pruning - omitting files and subdirectories Copy the contents of /source-dir to /dest-dir, but omit files and directories named .snapshot (and anything in them). It also omits files or directories whose name ends in `~', but not their contents. $ cd /source-dir $ find . -name .snapshot -prune -o \( \! -name '*~' -print0 \) \ | cpio -pmd0 /dest-dir The construct -prune -o \( ... -print0 \) is quite common. The idea here is that the expression before -prune matches things which are to be pruned. However, the -prune action itself returns true, so the following -o ensures that the right hand side is evaluated only for those directories which didn't get pruned (the contents of the pruned directories are not even visited, so their contents are irrelevant). The expression on the right hand side of the -o is in parentheses only for clarity. It emphasises that the -print0 action takes place only for things that didn't have -prune applied to them. Because the default `and' condition between tests binds more tightly than -o, this is the default anyway, but the parentheses help to show what is going on. Given the following directory of projects and their associated SCM administrative directories, perform an efficient search for the projects' roots: $ find repo/ \ \( -exec test -d '{}/.svn' \; \ -or -exec test -d '{}/.git' \; \ -or -exec test -d '{}/CVS' \; \ \) -print -prune Sample output: repo/project1/CVS repo/gnu/project2/.svn repo/gnu/project3/.svn repo/gnu/project3/src/.svn repo/project4/.git In this example, -prune prevents unnecessary descent into directories that have already been discovered (for example we do not search project3/src because we already found project3/.svn), but ensures sibling directories (project2 and project3) are found. Other useful examples Search for several file types. $ find /tmp -type f,d,l Search for files, directories, and symbolic links in the directory /tmp passing these types as a comma-separated list (GNU extension), which is otherwise equivalent to the longer, yet more portable: $ find /tmp \( -type f -o -type d -o -type l \) Search for files with the particular name needle and stop immediately when we find the first one. $ find / -name needle -print -quit Demonstrate the interpretation of the %f and %h format directives of the -printf action for some corner-cases. Here is an example including some output. $ find . .. / /tmp /tmp/TRACE compile compile/64/tests/find -maxdepth 0 -printf '[%h][%f]\n' [.][.] [.][..] [][/] [][tmp] [/tmp][TRACE] [.][compile] [compile/64/tests][find] EXIT STATUS top find exits with status 0 if all files are processed successfully, greater than 0 if errors occur. 
This is deliberately a very broad description, but if the return value is non-zero, you should not rely on the correctness of the results of find. When some error occurs, find may stop immediately, without completing all the actions specified. For example, some starting points may not have been examined or some pending program invocations for -exec ... {} + or -execdir ... {} + may not have been performed. HISTORY top A find program appeared in Version 5 Unix as part of the Programmer's Workbench project and was written by Dick Haight. Doug McIlroy's A Research UNIX Reader: Annotated Excerpts from the Programmers Manual, 1971-1986 provides some additional details; you can read it on-line at <https://www.cs.dartmouth.edu/~doug/reader.pdf>. GNU find was originally written by Eric Decker, with enhancements by David MacKenzie, Jay Plett, and Tim Wood. The idea for find -print0 and xargs -0 came from Dan Bernstein. COMPATIBILITY top As of findutils-4.2.2, shell metacharacters (`*', `?' or `[]' for example) used in filename patterns match a leading `.', because IEEE POSIX interpretation 126 requires this. As of findutils-4.3.3, -perm /000 now matches all files instead of none. Nanosecond-resolution timestamps were implemented in findutils-4.3.3. As of findutils-4.3.11, the -delete action sets find's exit status to a nonzero value when it fails. However, find will not exit immediately. Previously, find's exit status was unaffected by the failure of -delete. Feature Added in Also occurs in -files0-from 4.9.0 -newerXY 4.3.3 BSD -D 4.3.1 -O 4.3.1 -readable 4.3.0 -writable 4.3.0 -executable 4.3.0 -regextype 4.2.24 -exec ... + 4.2.12 POSIX -execdir 4.2.12 BSD -okdir 4.2.12 -samefile 4.2.11 -H 4.2.5 POSIX -L 4.2.5 POSIX -P 4.2.5 BSD -delete 4.2.3 -quit 4.2.3 -d 4.2.3 BSD -wholename 4.2.0 -iwholename 4.2.0 -ignore_readdir_race 4.2.0 -fls 4.0 -ilname 3.8 -iname 3.8 -ipath 3.8 -iregex 3.8 The syntax -perm +MODE was removed in findutils-4.5.12, in favour of -perm /MODE. The +MODE syntax had been deprecated since findutils-4.2.21 which was released in 2005. NON-BUGS top Operator precedence surprises The command find . -name afile -o -name bfile -print will never print afile because this is actually equivalent to find . -name afile -o \( -name bfile -a -print \). Remember that the precedence of -a is higher than that of -o and when there is no operator specified between tests, -a is assumed. paths must precede expression error message $ find . -name *.c -print find: paths must precede expression find: possible unquoted pattern after predicate `-name'? This happens when the shell could expand the pattern *.c to more than one file name existing in the current directory, and passing the resulting file names in the command line to find like this: find . -name frcode.c locate.c word_io.c -print That command is of course not going to work, because the -name predicate allows exactly only one pattern as argument. Instead of doing things this way, you should enclose the pattern in quotes or escape the wildcard, thus allowing find to use the pattern with the wildcard during the search for file name matching instead of file names expanded by the parent shell: $ find . -name '*.c' -print $ find . -name \*.c -print BUGS top There are security problems inherent in the behaviour that the POSIX standard specifies for find, which therefore cannot be fixed. For example, the -exec action is inherently insecure, and -execdir should be used instead. The environment variable LC_COLLATE has no effect on the -ok action. 
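Two hedged sketches tying together the notes above (afile, bfile and the *.bak pattern are placeholders, not names from the manual): explicit grouping makes -print apply to both -name tests, and -execdir is the safer alternative to -exec recommended in the BUGS section:
# group the tests so that -print applies to either name
$ find . \( -name afile -o -name bfile \) -print
# delete backup files with -execdir, which runs the command from each file's own directory
$ find . -type f -name '*.bak' -execdir rm -- '{}' \;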
REPORTING BUGS top GNU findutils online help: <https://www.gnu.org/software/findutils/#get-help> Report any translation bugs to <https://translationproject.org/team/> Report any other issue via the form at the GNU Savannah bug tracker: <https://savannah.gnu.org/bugs/?group=findutils> General topics about the GNU findutils package are discussed at the bug-findutils mailing list: <https://lists.gnu.org/mailman/listinfo/bug-findutils> COPYRIGHT top Copyright 1990-2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top chmod(1), locate(1), ls(1), updatedb(1), xargs(1), lstat(2), stat(2), ctime(3), fnmatch(3), printf(3), strftime(3), locatedb(5), regex(7) Full documentation <https://www.gnu.org/software/findutils/find> or available locally via: info find
# find\n\n> Find files or directories under a directory tree, recursively.\n> More information: <https://manned.org/find>.\n\n- Find files by extension:\n\n`find {{root_path}} -name '{{*.ext}}'`\n\n- Find files matching multiple path/name patterns:\n\n`find {{root_path}} -path '{{**/path/**/*.ext}}' -or -name '{{*pattern*}}'`\n\n- Find directories matching a given name, in case-insensitive mode:\n\n`find {{root_path}} -type d -iname '{{*lib*}}'`\n\n- Find files matching a given pattern, excluding specific paths:\n\n`find {{root_path}} -name '{{*.py}}' -not -path '{{*/site-packages/*}}'`\n\n- Find files matching a given size range, limiting the recursive depth to "1":\n\n`find {{root_path}} -maxdepth 1 -size {{+500k}} -size {{-10M}}`\n\n- Run a command for each file (use `{}` within the command to access the filename):\n\n`find {{root_path}} -name '{{*.ext}}' -exec {{wc -l}} {} \;`\n\n- Find all files modified today and pass the results to a single command as arguments:\n\n`find {{root_path}} -daystart -mtime {{-1}} -exec {{tar -cvf archive.tar}} {} \+`\n\n- Find empty (0 byte) files and delete them:\n\n`find {{root_path}} -type {{f}} -empty -delete`\n
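One further hedged example complementing the list above (the date string is only an example): GNU find's -newermt test, listed in the COMPATIBILITY table as part of the -newerXY family, selects files whose modification time is newer than the given timestamp:
$ find . -type f -newermt '2024-01-01' -print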
findfs
findfs(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training findfs(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXIT STATUS | ENVIRONMENT | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY FINDFS(8) System Administration FINDFS(8) NAME top findfs - find a filesystem by label or UUID SYNOPSIS top findfs NAME=value DESCRIPTION top findfs will search the block devices in the system looking for a filesystem or partition with a specified tag. The currently supported tags are: LABEL=<label> Specifies filesystem label. UUID=<uuid> Specifies filesystem UUID. PARTUUID=<uuid> Specifies partition UUID. This partition identifier is supported for example for GUID Partition Table (GPT) partition tables. PARTLABEL=<label> Specifies partition label (name). The partition labels are supported for example for GUID Partition Table (GPT) or MAC partition tables. If the filesystem or partition is found, the device name will be printed on stdout. A complete overview of filesystems and partitions can be obtained, for example, with lsblk --fs, partx --show <disk>, or blkid. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top 0 success 1 label or uuid cannot be found 2 usage error, wrong number of arguments or unknown option ENVIRONMENT top LIBBLKID_DEBUG=all enables libblkid debug output. AUTHORS top findfs was originally written by Theodore Ts'o <tytso@mit.edu> and re-written for the util-linux package by Karel Zak <kzak@redhat.com>. SEE ALSO top blkid(8), lsblk(8), partx(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The findfs command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. util-linux 2.39.594-1e0ad 2023-07-19 FINDFS(8)
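A minimal scripting sketch building on the behaviour documented above (the matching device name is printed on stdout, and a missing label yields exit status 1); the label backup and the mountpoint /mnt/backup are placeholders, and the mount step needs root:
# resolve a label to a device path, bail out if the label is not found
dev=$(findfs LABEL=backup) || { echo "no filesystem labelled 'backup'" >&2; exit 1; }
mount "$dev" /mnt/backup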
# findfs\n\n> Finds a filesystem by label or UUID.\n> More information: <https://mirrors.edge.kernel.org/pub/linux/utils/util-linux>.\n\n- Search block devices by filesystem label:\n\n`findfs LABEL={{label}}`\n\n- Search by filesystem UUID:\n\n`findfs UUID={{uuid}}`\n\n- Search by partition label (GPT or MAC partition table):\n\n`findfs PARTLABEL={{partition_label}}`\n\n- Search by partition UUID (GPT partition table only):\n\n`findfs PARTUUID={{partition_uuid}}`\n
findmnt
findmnt(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training findmnt(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | ENVIRONMENT | EXAMPLES | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY FINDMNT(8) System Administration FINDMNT(8) NAME top findmnt - find a filesystem SYNOPSIS top findmnt [options] findmnt [options] device|mountpoint findmnt [options] [--source] device [--target path|--mountpoint mountpoint] DESCRIPTION top findmnt will list all mounted filesystems or search for a filesystem. The findmnt command is able to search in /etc/fstab, /etc/mtab or /proc/self/mountinfo. If device or mountpoint is not given, all filesystems are shown. The device may be specified by device name, major:minor numbers, filesystem label or UUID, or partition label or UUID. Note that findmnt follows mount(8) behavior where a device name may be interpreted as a mountpoint (and vice versa) if the --target, --mountpoint or --source options are not specified. The command-line option --target accepts any file or directory and then findmnt displays the filesystem for the given path. The command prints all mounted filesystems in the tree-like format by default. The default output is subject to change, so whenever possible you should avoid relying on the default output in your scripts. Always explicitly define the expected columns by using --output columns-list in environments where a stable output is required. The relationship between block devices and filesystems is not always one-to-one. A filesystem may use more than one block device. This is why findmnt provides the SOURCE and SOURCES (plural) columns. The SOURCES column displays all devices where it is possible to find the same filesystem UUID (or another tag specified in fstab when executed with --fstab and --evaluate). OPTIONS top -A, --all Disable all built-in filters and print all filesystems. -a, --ascii Use ASCII characters for tree formatting. -b, --bytes Print the sizes in bytes rather than in a human-readable format. By default, sizes are expressed in bytes and unit prefixes are powers of 2^10 (1024). The unit symbols are abbreviated to their first letter to improve readability; for example, "1 KiB" and "1 MiB" are shown as "1 K" and "1 M", deliberately omitting the "iB" suffix that is part of these abbreviations. -C, --nocanonicalize Do not canonicalize paths at all. This option affects the comparing of paths and the evaluation of tags (LABEL, UUID, etc.). -c, --canonicalize Canonicalize all printed paths. --deleted Print filesystems where the target (mountpoint) is marked as deleted by the kernel. -D, --df Imitate the output of df(1). This option is equivalent to -o SOURCE,FSTYPE,SIZE,USED,AVAIL,USE%,TARGET but excludes all pseudo filesystems. Use --all to print all filesystems. -d, --direction word The search direction, either forward or backward. -e, --evaluate Convert all tags (LABEL, UUID, PARTUUID, or PARTLABEL) to the corresponding device names for the SOURCE column. It is an unusual situation, but the same tag may be duplicated (used for several devices). For this purpose, there is the SOURCES (plural) column, which displays in a multi-line cell all devices where the tag is detected by libblkid. This option makes sense for fstab only. -F, --tab-file path Search in an alternative file. If used with --fstab, --mtab or --kernel, then it overrides the default paths.
If specified more than once, then tree-like output is disabled (see the --list option). -f, --first-only Print the first matching filesystem only. -H, --list-columns List the available columns; use with --json or --raw to get output in a machine-readable format. -i, --invert Invert the sense of matching. -J, --json Use JSON output format. -k, --kernel Search in /proc/self/mountinfo. The output is in the tree-like format. This is the default. The output contains only mount options maintained by the kernel (see also --mtab). -l, --list Use the list output format. This output format is automatically enabled if the output is restricted by the -t, -O, -S or -T option and the option --submounts is not used, or if more than one source file (the -F option) is specified. -M, --mountpoint path Explicitly define the mountpoint file or directory. See also --target. -m, --mtab Search in /etc/mtab. The output is in the list format by default (see --tree). The output may include user space mount options. -N, --task tid Use the alternative namespace /proc/<tid>/mountinfo rather than the default /proc/self/mountinfo. If the option is specified more than once, then tree-like output is disabled (see the --list option). See also the unshare(1) command. -n, --noheadings Do not print a header line. -O, --options list Limit the set of printed filesystems. More than one option may be specified in a comma-separated list. The -t and -O options are cumulative in effect. It is different from -t in that each option is matched exactly; a leading no at the beginning does not have global meaning. The "no" prefix can be used for individual items in the list. The "no" prefix interpretation can be disabled with the "+" prefix. -o, --output list Define output columns. See the --help output to get a list of the currently supported columns. The TARGET column contains tree formatting if the --list or --raw options are not specified. The default list of columns may be extended if list is specified in the format +list (e.g., findmnt -o +PROPAGATION). --output-all Output almost all available columns. The columns that require --poll are not included. -P, --pairs Produce output in the form of key="value" pairs. All potentially unsafe value characters are hex-escaped (\x<code>). See also the option --shell. Note that the SOURCES column uses multi-line cells. In these cases, the column uses an array-like formatting in the output, for example name=("aaa" "bbb" "ccc"). -p, --poll[=list] Monitor changes in the /proc/self/mountinfo file. Supported actions are: mount, umount, remount and move. More than one action may be specified in a comma-separated list. All actions are monitored by default. The time for which --poll will block can be restricted with the --timeout or --first-only options. The standard columns always use the new version of the information from the mountinfo file, except the umount action which is based on the original information cached by findmnt. The poll mode allows using extra columns: ACTION: the mount, umount, move or remount action name; this column is enabled by default. OLD-TARGET: available for the umount and move actions. OLD-OPTIONS: available for the umount and remount actions. --pseudo Print only pseudo filesystems. --shadow Print only filesystems over-mounted by another filesystem. -R, --submounts Print recursively all submounts for the selected filesystems. The restrictions defined by the options -t, -O, -S, -T and --direction are not applied to submounts. All submounts are always printed in tree-like order.
The option enables the tree-like output format by default. This option has no effect for --mtab or --fstab. -r, --raw Use raw output format. All potentially unsafe characters are hex-escaped (\x<code>). Note that the SOURCES column uses multi-line cells. In these cases, the column may produce multiple strings on the same line. --real Print only real filesystems. -S, --source spec Explicitly define the mount source. Supported specifications are device, maj:min, LABEL=label, UUID=uuid, PARTLABEL=label and PARTUUID=uuid. -s, --fstab Search in /etc/fstab. The output is in the list format (see --list). -T, --target path Define the mount target. If path is not a mountpoint file or directory, then findmnt checks the path elements in reverse order to get the mountpoint (this feature is supported only when searching in kernel files and is unsupported for --fstab). It is recommended to use the --mountpoint option when checking of path elements is unwanted and path is a strictly specified mountpoint. -t, --types list Limit the set of printed filesystems. More than one type may be specified in a comma-separated list. The list of filesystem types can be prefixed with no to specify the filesystem types on which no action should be taken. For more details see mount(8). --tree Enable tree-like output if possible. The option is silently ignored for tables that lack a child-parent relationship (e.g., fstab). --shadowed Print only filesystems over-mounted by another filesystem. -U, --uniq Ignore filesystems with duplicate mount targets, thus effectively skipping over-mounted mount points. -u, --notruncate Do not truncate text in columns. The default is to not truncate the TARGET, SOURCE, UUID, LABEL, PARTUUID, PARTLABEL columns. This option disables text truncation also in all other columns. -v, --nofsroot Do not print a [/dir] in the SOURCE column for bind mounts or btrfs subvolumes. -w, --timeout milliseconds Specify an upper limit on the time for which --poll will block, in milliseconds. -x, --verify Check mount table content. The default is to verify /etc/fstab parsability and usability. It is possible to use this option also with --tab-file, and to specify a source (device) or target (mountpoint) to filter the mount table. The option --verbose forces findmnt to print more details. --verbose Force findmnt to print more information (--verify only for now). --vfs-all When used with the VFS-OPTIONS column, print all VFS (fs-independent) flags. This option is designed for auditing purposes, to list also the default VFS kernel mount options which are normally not listed. -y, --shell The column name will be modified to contain only characters allowed for shell variable identifiers. This is usable, for example, with --pairs. Note that this feature has been automatically enabled for --pairs in version 2.37, but due to compatibility issues, it is now necessary to request this behavior with --shell. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top The exit value is 0 if there is something to display, or 1 on any error (for example if no filesystem is found based on the user's filter specification, or the device path or mountpoint does not exist). ENVIRONMENT top LIBMOUNT_FSTAB=<path> overrides the default location of the fstab file LIBMOUNT_MTAB=<path> overrides the default location of the mtab file LIBMOUNT_DEBUG=all enables libmount debug output LIBSMARTCOLS_DEBUG=all enables libsmartcols debug output LIBSMARTCOLS_DEBUG_PADDING=on use visible padding characters.
EXAMPLES top findmnt --fstab -t nfs Prints all NFS filesystems defined in /etc/fstab. findmnt --fstab /mnt/foo Prints all /etc/fstab filesystems where the mountpoint directory is /mnt/foo. It also prints bind mounts where /mnt/foo is a source. findmnt --fstab --target /mnt/foo Prints all /etc/fstab filesystems where the mountpoint directory is /mnt/foo. findmnt --fstab --evaluate Prints all /etc/fstab filesystems and converts LABEL= and UUID= tags to the real device names. findmnt -n --raw --evaluate --output=target LABEL=/boot Prints only the mountpoint where the filesystem with label "/boot" is mounted. findmnt --poll --mountpoint /mnt/foo Monitors mount, unmount, remount and move on /mnt/foo. findmnt --poll=umount --first-only --mountpoint /mnt/foo Waits for /mnt/foo unmount. findmnt --poll=remount -t ext3 -O ro Monitors remounts to read-only mode on all ext3 filesystems. AUTHORS top Karel Zak <kzak@redhat.com> SEE ALSO top fstab(5), mount(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The findmnt command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. util-linux 2.39.1041-8a7c 2023-12-22 FINDMNT(8)
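A short sketch following the scripting advice in the DESCRIPTION above (pin the expected columns explicitly instead of relying on the default output); the mountpoint / is only an example:
# machine-readable output with a fixed column set and no header line
$ findmnt --noheadings --raw --output TARGET,SOURCE,FSTYPE /
# the same information as JSON
$ findmnt --json --output TARGET,SOURCE,FSTYPE /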
# findmnt\n\n> Find or list mounted filesystems.\n> More information: <https://manned.org/findmnt>.\n\n- List all mounted filesystems:\n\n`findmnt`\n\n- Search for a device:\n\n`findmnt {{/dev/sdb1}}`\n\n- Search for a mountpoint:\n\n`findmnt {{/}}`\n\n- Find filesystems of a specific type:\n\n`findmnt -t {{ext4}}`\n\n- Find filesystems with a specific label:\n\n`findmnt LABEL={{BigStorage}}`\n
firejail
firejail(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training firejail(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | USAGE | OPTIONS | NAME | DESKTOP INTEGRATION | EXAMPLES | FILE GLOBBING | FILE TRANSFER | MONITORING | RESTRICTED SHELL | SECURITY PROFILES | TRAFFIC SHAPING | LICENSE | SEE ALSO | COLOPHON FIREJAIL(1) firejail man page FIREJAIL(1) NAME top Firejail - Linux namespaces sandbox program SYNOPSIS top Start a sandbox: firejail [OPTIONS] [program and arguments] Start an AppImage program: firejail [OPTIONS] --appimage [OPTIONS] [appimage-file and arguments] File transfer from an existing sandbox firejail {--ls | --get | --put | --cat} dir_or_filename Network traffic shaping for an existing sandbox: firejail --bandwidth={name|pid} bandwidth-command Monitoring: firejail {--list | --netstats | --top | --tree} Miscellaneous: firejail {-? | --debug-caps | --debug-errnos | --debug- syscalls | --debug-syscalls32 | --debug-protocols | --help | --version} DESCRIPTION top Firejail is a SUID sandbox program that reduces the risk of security breaches by restricting the running environment of untrusted applications using Linux namespaces, seccomp-bpf and Linux capabilities. It allows a process and all its descendants to have their own private view of the globally shared kernel resources, such as the network stack, process table, mount table. Firejail can work in a SELinux or AppArmor environment, and it is integrated with Linux Control Groups. Written in C with virtually no dependencies, the software runs on any Linux computer with a 3.x kernel version or newer. It can sandbox any type of processes: servers, graphical applications, and even user login sessions. Firejail allows the user to manage application security using security profiles. Each profile defines a set of permissions for a specific application or group of applications. The software includes security profiles for a number of more common Linux programs, such as Mozilla Firefox, Chromium, VLC, Transmission etc. Firejail is currently implemented as an SUID binary, which means that if a malicious or compromised user account manages to exploit a bug in Firejail, that could ultimately lead to a privilege escalation to root. To mitigate this, it is recommended to only allow trusted users to run firejail (see firejail-users(5) for details on how to achieve that). For more details on the security/usability tradeoffs of Firejail, see: #4601 https://github.com/netblue30/firejail/discussions/4601 Alternative sandbox technologies like snap (https://snapcraft.io/) and flatpak (https://flatpak.org/) are not supported. Snap and flatpak packages have their own native management tools and will not work when sandboxed with Firejail. USAGE top Without any options, the sandbox consists of a filesystem build in a new mount namespace, and new PID and UTS namespaces. IPC, network and user namespaces can be added using the command line options. The default Firejail filesystem is based on the host filesystem with the main system directories mounted read-only. These directories are /etc, /var, /usr, /bin, /sbin, /lib, /lib32, /libx32 and /lib64. Only /home and /tmp are writable. Upon execution Firejail first looks in ~/.config/firejail/ for a profile and if it doesn't find one, it looks in /etc/firejail/. For profile resolution detail see https://github.com/netblue30/firejail/wiki/Creating-Profiles#locations-and-types. If an appropriate profile is not found, Firejail will use a default profile. 
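A small sketch of the profile lookup order just described (firefox is only an example; any profile shipped in /etc/firejail/ works the same way): a per-user copy placed in ~/.config/firejail/ takes precedence over the system-wide profile and can then be edited freely:
$ mkdir -p ~/.config/firejail
$ cp /etc/firejail/firefox.profile ~/.config/firejail/
$ firejail firefox    # now runs with the user-local copy of the profile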
The default profile is quite restrictive. In case the application doesn't work, use --noprofile option to disable it. For more information, please see SECURITY PROFILES section below. If a program argument is not specified, Firejail starts the user's preferred shell. Examples: $ firejail [OPTIONS] # starting the program specified in $SHELL, usually /bin/bash $ firejail [OPTIONS] firefox # starting Mozilla Firefox # sudo firejail [OPTIONS] /etc/init.d/nginx start OPTIONS top -- Signal the end of options and disables further option processing. --allow-debuggers Allow tools such as strace and gdb inside the sandbox by whitelisting system calls ptrace and process_vm_readv. This option is only available when running on Linux kernels 4.8 or newer - a kernel bug in ptrace system call allows a full bypass of the seccomp filter. Example: $ firejail --allow-debuggers --profile=/etc/firejail/firefox.profile strace -f firefox --allusers All directories under /home are visible inside the sandbox. By default, only current user home directory is visible. Example: $ firejail --allusers --appimage Sandbox an AppImage (https://appimage.org/) application. If the sandbox is started as a regular user, nonewprivs and a default capabilities filter are enabled. private- bin and private-lib are disabled by default when running appimages. Example: $ firejail --appimage --profile=krita krita-3.0-x86_64.appimage $ firejail --quiet --appimage --private --profile=krita krita-3.0-x86_64.appimage $ firejail --appimage --net=none --x11 --profile=krita krita-3.0-x86_64.appimage Note: When using both --appimage and --profile, it is recommended to always specify the former before the latter, so that any ?HAS_APPIMAGE conditionals inside of the profile evaluate to true (see ?CONDITIONAL in firejail-profile(5)). --bandwidth=name|pid Set bandwidth limits for the sandbox identified by name or PID, see TRAFFIC SHAPING section for more details. --bind=filename1,filename2 Mount-bind filename1 on top of filename2. This option is only available when running as root. Example: # firejail --bind=/config/etc/passwd,/etc/passwd --blacklist=dirname_or_filename Blacklist directory or file. File globbing is supported, see FILE GLOBBING section for more details. Symbolic link handling: Blacklisting a path that is a symbolic link will also blacklist the path that it points to. For example, if ~/foo is blacklisted and it points to /bar, then /bar will also be blacklisted. Example: $ firejail --blacklist=/sbin --blacklist=/usr/sbin $ firejail --blacklist=~/.mozilla $ firejail "--blacklist=/home/username/My Virtual Machines" $ firejail --blacklist=/home/username/My\ Virtual\ Machines --build The command builds a whitelisted profile. The profile is printed on the screen. The program is run in a very relaxed sandbox, with only --caps.drop=all and --seccomp=!chroot. Programs that raise user privileges are not supported. Example: $ firejail --build vlc ~/Videos/test.mp4 $ firejail --build --appimage ~/Downloads/Subsurface.AppImage --build=profile-file The command builds a whitelisted profile, and saves it in profile-file. The program is run in a very relaxed sandbox, with only --caps.drop=all and --seccomp=!chroot. Programs that raise user privileges are not supported. Example: $ firejail --build=vlc.profile vlc ~/Videos/test.mp4 $ firejail --build=Subsurface.profile --appimage ~/Downloads/Subsurface.AppImage -c Login shell compatibility option. 
This option is use by some login programs when executing the login shell, such as when firejail is used as a restricted login shell. It currently does not change the execution of firejail. --caps Linux capabilities is a kernel feature designed to split up the root privilege into a set of distinct privileges. These privileges can be enabled or disabled independently, thus restricting what a process running as root can do in the system. See capabilities(7) for details. By default root programs run with all capabilities enabled. --caps option disables the following capabilities: CAP_SYS_MODULE, CAP_SYS_RAWIO, CAP_SYS_BOOT, CAP_SYS_NICE, CAP_SYS_TTY_CONFIG, CAP_SYSLOG, CAP_MKNOD, CAP_SYS_ADMIN. The filter is applied to all processes started in the sandbox. Example: $ sudo firejail --caps /etc/init.d/nginx start --caps.drop=all Drop all capabilities for the processes running in the sandbox. This option is recommended for running GUI programs or any other program that doesn't require root privileges. It is a must-have option for sandboxing untrusted programs installed from unofficial sources - such as games, Java programs, etc. Example: $ firejail --caps.drop=all warzone2100 --caps.drop=capability,capability,capability Define a custom blacklist Linux capabilities filter. Example: $ firejail --caps.drop=net_broadcast,net_admin,net_raw --caps.keep=capability,capability,capability Define a custom whitelist Linux capabilities filter. Example: $ sudo firejail --caps.keep=chown,net_bind_service,setgid,\ setuid /etc/init.d/nginx start --caps.print=name|pid Print the caps filter for the sandbox identified by name or by PID. Example: $ firejail --name=mygame --caps.drop=all warzone2100 & $ firejail --caps.print=mygame Example: $ firejail --list 3272:netblue::firejail --private firefox $ firejail --caps.print=3272 --cat=name|pid filename Print content of file from sandbox container, see FILE TRANSFER section for more details. --chroot=dirname Chroot the sandbox into a root filesystem. Unlike the regular filesystem container, the system directories are mounted read-write. If the sandbox is started as a regular user, nonewprivs and a default capabilities filter are enabled. Example: $ firejail --chroot=/media/ubuntu warzone2100 For automatic mounting of X11 and PulseAudio sockets set environment variables FIREJAIL_CHROOT_X11 and FIREJAIL_CHROOT_PULSE. Note: Support for this command is controlled in firejail.config with the chroot option. --cpu=cpu-number,cpu-number,cpu-number Set CPU affinity. Example: $ firejail --cpu=0,1 handbrake --cpu.print=name|pid Print the CPU cores in use by the sandbox identified by name or by PID. Example: $ firejail --name=mygame --caps.drop=all warzone2100 & $ firejail --cpu.print=mygame Example: $ firejail --list 3272:netblue::firejail --private firefox $ firejail --cpu.print=3272 --dbus-log=file Specify the location for the DBus log file. The log file contains events for both the system and session buses if both of the --dbus-system.log and --dbus- user.log options are specified. If no log file path is given, logs are written to the standard output instead. Example: $ firejail --dbus-system=filter --dbus-system.log \ --dbus-log=dbus.txt --dbus-system=filter|none Set system DBus sandboxing policy. The filter policy enables the system DBus filter. This option requires installing the xdg-dbus-proxy utility. Permissions for well-known can be specified with the --dbus-system.talk and --dbus-system.own options. The none policy disables access to the system DBus. 
Only the regular system DBus UNIX socket is handled by this option. To disable the abstract sockets (and force applications to use the filtered UNIX socket) you would need to request a new network namespace using --net command. Another option is to remove unix from the --protocol set. Example: $ firejail --dbus-system=none --dbus-system.broadcast=name=[member][@path] Allows the application to receive broadcast signals from the indicated interface member at the indicated object path exposed by the indicated bus name on the system DBus. The name may have a .* suffix to match all names underneath it, including itself. The interface member may have a .* to match all members of an interface, or be * to match all interfaces. The path may have a /* suffix to indicate all objects underneath it, including itself. Omitting the interface member or the object path will match all members and object paths, respectively. Example: $ firejail --dbus-system=filter --dbus-system.broadcast=\ org.freedesktop.Notifications=\ org.freedesktop.Notifications.*@/org/freedesktop/Notifications --dbus-system.call=name=[member][@path] Allows the application to call the indicated interface member at the indicated object path exposed by the indicated bus name on the system DBus. The name may have a .* suffix to match all names underneath it, including itself. The interface member may have a .* to match all members of an interface, or be * to match all interfaces. The path may have a /* suffix to indicate all objects underneath it, including itself. Omitting the interface member or the object path will match all members and object paths, respectively. Example: $ firejail --dbus-system=filter --dbus-system.call=\ org.freedesktop.Notifications=\ org.freedesktop.Notifications.*@/org/freedesktop/Notifications --dbus-system.log Turn on DBus logging for the system DBus. This option requires --dbus-system=filter. Example: $ firejail --dbus-system=filter --dbus-system.log --dbus-system.own=name Allows the application to own the specified well-known name on the system DBus. The name may have a .* suffix to match all names underneath it, including itself (e.g. "foo.bar.*" matches "foo.bar", "foo.bar.baz" and "foo.bar.baz.quux", but not "foobar"). Example: $ firejail --dbus-system=filter --dbus-system.own=\ org.gnome.ghex.* --dbus-system.see=name Allows the application to see, but not talk to the specified well-known name on the system DBus. The name may have a .* suffix to match all names underneath it, including itself (e.g. "foo.bar.*" matches "foo.bar", "foo.bar.baz" and "foo.bar.baz.quux", but not "foobar"). Example: $ firejail --dbus-system=filter --dbus-system.see=\ org.freedesktop.Notifications --dbus-system.talk=name Allows the application to talk to the specified well-known name on the system DBus. The name may have a .* suffix to match all names underneath it, including itself (e.g. "foo.bar.*" matches "foo.bar", "foo.bar.baz" and "foo.bar.baz.quux", but not "foobar"). Example: $ firejail --dbus-system=filter --dbus-system.talk=\ org.freedesktop.Notifications --dbus-user=filter|none Set session DBus sandboxing policy. The filter policy enables the session DBus filter. This option requires installing the xdg-dbus-proxy utility. Permissions for well-known names can be added with the --dbus-user.talk and --dbus-user.own options. The none policy disables access to the session DBus. Only the regular session DBus UNIX socket is handled by this option. 
To disable the abstract sockets (and force applications to use the filtered UNIX socket) you would need to request a new network namespace using --net command. Another option is to remove unix from the --protocol set. Example: $ firejail --dbus-user=none --dbus-user.broadcast=name=[member][@path] Allows the application to receive broadcast signals from the indicated interface member at the indicated object path exposed by the indicated bus name on the session DBus. The name may have a .* suffix to match all names underneath it, including itself. The interface member may have a .* to match all members of an interface, or be * to match all interfaces. The path may have a /* suffix to indicate all objects underneath it, including itself. Omitting the interface member or the object path will match all members and object paths, respectively. Example: $ firejail --dbus-user=filter --dbus-user.broadcast=\ org.freedesktop.Notifications=\ org.freedesktop.Notifications.*@/org/freedesktop/Notifications --dbus-user.call=name=[member][@path] Allows the application to call the indicated interface member at the indicated object path exposed by the indicated bus name on the session DBus. The name may have a .* suffix to match all names underneath it, including itself. The interface member may have a .* to match all members of an interface, or be * to match all interfaces. The path may have a /* suffix to indicate all objects underneath it, including itself. Omitting the interface member or the object path will match all members and object paths, respectively. Example: $ firejail --dbus-user=filter --dbus-user.call=\ org.freedesktop.Notifications=\ org.freedesktop.Notifications.*@/org/freedesktop/Notifications --dbus-user.log Turn on DBus logging for the session DBus. This option requires --dbus-user=filter. Example: $ firejail --dbus-user=filter --dbus-user.log --dbus-user.own=name Allows the application to own the specified well-known name on the session DBus. The name may have a .* suffix to match all names underneath it, including itself (e.g. "foo.bar.*" matches "foo.bar", "foo.bar.baz" and "foo.bar.baz.quux", but not "foobar"). Example: $ firejail --dbus-user=filter --dbus- user.own=org.gnome.ghex.* --dbus-user.talk=name Allows the application to talk to the specified well-known name on the session DBus. The name may have a .* suffix to match all names underneath it, including itself (e.g. "foo.bar.*" matches "foo.bar", "foo.bar.baz" and "foo.bar.baz.quux", but not "foobar"). Example: $ firejail --dbus-user=filter --dbus-user.talk=\ org.freedesktop.Notifications --dbus-user.see=name Allows the application to see, but not talk to the specified well-known name on the session DBus. The name may have a .* suffix to match all names underneath it, including itself (e.g. "foo.bar.*" matches "foo.bar", "foo.bar.baz" and "foo.bar.baz.quux", but not "foobar"). Example: $ firejail --dbus-user=filter --dbus-user.see=\ org.freedesktop.Notifications --debug Print debug messages. Example: $ firejail --debug firefox --debug-blacklists Debug blacklisting. Example: $ firejail --debug-blacklists firefox --debug-caps Print all recognized capabilities in the current Firejail software build and exit. Example: $ firejail --debug-caps --debug-errnos Print all recognized error numbers in the current Firejail software build and exit. Example: $ firejail --debug-errnos --debug-protocols Print all recognized protocols in the current Firejail software build and exit. 
Example: $ firejail --debug-protocols --debug-syscalls Print all recognized system calls in the current Firejail software build and exit. Example: $ firejail --debug-syscalls --debug-syscalls32 Print all recognized 32 bit system calls in the current Firejail software build and exit. --debug-whitelists Debug whitelisting. Example: $ firejail --debug-whitelists firefox --defaultgw=address Use this address as default gateway in the new network namespace. Example: $ firejail --net=eth0 --defaultgw=10.10.20.1 firefox --deterministic-exit-code Always exit firejail with the first child's exit status. The default behavior is to use the exit status of the final child to exit, which can be nondeterministic. --deterministic-shutdown Always shut down the sandbox after the first child has terminated. The default behavior is to keep the sandbox alive as long as it contains running processes. --disable-mnt Blacklist /mnt, /media, /run/mount and /run/media access. Example: $ firejail --disable-mnt firefox --dns=address Set a DNS server for the sandbox. Up to three DNS servers can be defined. Use this option if you don't trust the DNS setup on your network. Example: $ firejail --dns=8.8.8.8 --dns=8.8.4.4 firefox Note: this feature is not supported on systemd-resolved setups. --dns.print=name|pid Print DNS configuration for a sandbox identified by name or by PID. Example: $ firejail --name=mygame --caps.drop=all warzone2100 & $ firejail --dns.print=mygame Example: $ firejail --list 3272:netblue::firejail --private firefox $ firejail --dns.print=3272 --dnstrace[=name|pid] Monitor DNS queries. The sandbox can be specified by name or pid. Only networked sandboxes created with --net are supported. This option is only available when running the sandbox as root. Without a name/pid, Firejail will monitor the main system network namespace. Example: $ sudo firejail --dnstrace 11:31:43 9.9.9.9 linux.com (type 1) 11:31:45 9.9.9.9 fonts.googleapis.com (type 1) NXDOMAIN 11:31:45 9.9.9.9 js.hs-scripts.com (type 1) NXDOMAIN 11:31:45 9.9.9.9 www.linux.com (type 1) 11:31:45 9.9.9.9 fonts.googleapis.com (type 1) NXDOMAIN 11:31:52 9.9.9.9 js.hs-scripts.com (type 1) NXDOMAIN 11:32:05 9.9.9.9 secure.gravatar.com (type 1) 11:32:06 9.9.9.9 secure.gravatar.com (type 1) 11:32:08 9.9.9.9 taikai.network (type 1) 11:32:08 9.9.9.9 cdn.jsdelivr.net (type 1) 11:32:08 9.9.9.9 taikai.azureedge.net (type 1) 11:32:08 9.9.9.9 www.youtube.com (type 1) --env=name=value Set environment variable in the new sandbox. Example: $ firejail --env=LD_LIBRARY_PATH=/opt/test/lib --fs.print=name|pid Print the filesystem log for the sandbox identified by name or by PID. Example: $ firejail --name=mygame --caps.drop=all warzone2100 & $ firejail --fs.print=mygame Example: $ firejail --list 3272:netblue::firejail --private firefox $ firejail --fs.print=3272 --get=name|pid filename Get a file from sandbox container, see FILE TRANSFER section for more details. -?, --help Print options end exit. --hostname=name Set sandbox hostname. For valid names, see the NAME VALIDATION section. Example: $ firejail --hostname=officepc firefox --hosts-file=file Use file as /etc/hosts. Example: $ firejail --hosts-file=~/myhosts firefox --ignore=command Ignore command in profile file. Example: $ firejail --ignore=seccomp --ignore=caps firefox $ firejail --ignore="net eth0" firefox --icmptrace[=name|pid] Monitor ICMP traffic. The sandbox can be specified by name or pid. Only networked sandboxes created with --net are supported. 
This option is only available when running the sandbox as root. Without a name/pid, Firejail will monitor the main system network namespace. Example $ sudo firejail --icmptrace 20:53:54 192.168.1.60 -> 142.250.65.174 - 98 bytes - Echo request/0 20:53:54 142.250.65.174 -> 192.168.1.60 - 98 bytes - Echo reply/0 20:53:55 192.168.1.60 -> 142.250.65.174 - 98 bytes - Echo request/0 20:53:55 142.250.65.174 -> 192.168.1.60 - 98 bytes - Echo reply/0 20:53:55 192.168.1.60 -> 1.1.1.1 - 154 bytes - Destination unreachable/Port unreachable --include=file.profile Include a profile file before the regular profiles are used. Example: $ firejail --include=/etc/firejail/disable-devel.inc gedit --interface=interface Move interface in a new network namespace. Up to four --interface options can be specified. Note: wlan devices are not supported for this option. Example: $ firejail --interface=eth1 --interface=eth0.vlan100 --ip=address Assign IP addresses to the last network interface defined by a --net option. A default gateway is assigned by default. Example: $ firejail --net=eth0 --ip=10.10.20.56 firefox --ip=none No IP address and no default gateway are configured for the last interface defined by a --net option. Use this option in case you intend to start an external DHCP client in the sandbox. Example: $ firejail --net=eth0 --ip=none If the corresponding interface doesn't have an IP address configured, this option is enabled by default. --ip=dhcp Acquire an IP address and default gateway for the last interface defined by a --net option, as well as set the DNS servers according to the DHCP response. This option requires the ISC dhclient DHCP client to be installed and will start it automatically inside the sandbox. Example: $ firejail --net=br0 --ip=dhcp This option should not be used in conjunction with the --dns option if the DHCP server is set to configure DNS servers for the clients, because the manually specified DNS servers will be overwritten. The DHCP client will NOT release the DHCP lease when the sandbox terminates. If your DHCP server requires leases to be explicitly released, consider running a DHCP client and releasing the lease manually in conjunction with the --net=none option. --ip6=address Assign IPv6 addresses to the last network interface defined by a --net option. Example: $ firejail --net=eth0 --ip6=2001:0db8:0:f101::1/64 firefox Note: you don't need this option if you obtain your ip6 address from router via SLAAC (your ip6 address and default route will be configured by kernel automatically). --ip6=dhcp Acquire an IPv6 address and default gateway for the last interface defined by a --net option, as well as set the DNS servers according to the DHCP response. This option requires the ISC dhclient DHCP client to be installed and will start it automatically inside the sandbox. Example: $ firejail --net=br0 --ip6=dhcp This option should not be used in conjunction with the --dns option if the DHCP server is set to configure DNS servers for the clients, because the manually specified DNS servers will be overwritten. The DHCP client will NOT release the DHCP lease when the sandbox terminates. If your DHCP server requires leases to be explicitly released, consider running a DHCP client and releasing the lease manually. --iprange=address,address Assign an IP address in the provided range to the last network interface defined by a --net option. A default gateway is assigned by default. 
Example: $ firejail --net=eth0 --iprange=192.168.1.100,192.168.1.150 --ipc-namespace Enable a new IPC namespace if the sandbox was started as a regular user. IPC namespace is enabled by default for sandboxes started as root. Example: $ firejail --ipc-namespace firefox --join=name|pid Join the sandbox identified by name or by PID. By default a /bin/bash shell is started after joining the sandbox. If a program is specified, the program is run in the sandbox. If --join command is issued as a regular user, all security filters are configured for the new process the same they are configured in the sandbox. If --join command is issued as root, the security filters and cpus configurations are not applied to the process joining the sandbox. Example: $ firejail --name=mygame --caps.drop=all warzone2100 & $ firejail --join=mygame Example: $ firejail --list 3272:netblue::firejail --private firefox $ firejail --join=3272 --join-filesystem=name|pid Join the mount namespace of the sandbox identified by name or PID. By default a /bin/bash shell is started after joining the sandbox. If a program is specified, the program is run in the sandbox. This command is available only to root user. Security filters and cpus configurations are not applied to the process joining the sandbox. --join-network=name|pid Join the network namespace of the sandbox identified by name. By default a /bin/bash shell is started after joining the sandbox. If a program is specified, the program is run in the sandbox. This command is available only to root user. Security filters and cpus configurations are not applied to the process joining the sandbox. Example: # start firefox $ firejail --net=eth0 --name=browser firefox & # change netfilter configuration $ sudo firejail --join-network=browser bash -c "cat /etc/firejail/nolocal.net | /sbin/iptables-restore" # verify netfilter configuration $ sudo firejail --join-network=browser /sbin/iptables -vL # verify IP addresses $ sudo firejail --join-network=browser ip addr Switching to pid 1932, the first child process inside the sandbox 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0-1931: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether 76:58:14:42:78:e4 brd ff:ff:ff:ff:ff:ff inet 192.168.1.158/24 brd 192.168.1.255 scope global eth0-1931 valid_lft forever preferred_lft forever inet6 fe80::7458:14ff:fe42:78e4/64 scope link valid_lft forever preferred_lft forever --join-or-start=name Join the sandbox identified by name or start a new one. Same as "firejail --join=name" if sandbox with specified name exists, otherwise same as "firejail --name=name ...". See --name for details. Note that in contrary to other join options there is respective profile option. --keep-config-pulse Disable automatic ~/.config/pulse init, for complex setups such as remote pulse servers or non-standard socket paths. Example: $ firejail --keep-config-pulse firefox --keep-dev-shm /dev/shm directory is untouched (even with --private-dev) Example: $ firejail --keep-dev-shm --private-dev --keep-fd=all Inherit all open file descriptors to the sandbox. By default only file descriptors 0, 1 and 2 are inherited to the sandbox, and all other file descriptors are closed. 
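As a rough sketch (assuming a bash shell, which applies the 3< redirection to the firejail process), an extra descriptor opened by the caller stays usable inside the sandbox:
 $ firejail --noprofile --keep-fd=all cat /proc/self/fd/3 3</etc/hostname
Without --keep-fd=all the descriptor would be closed before the sandboxed program starts, and the cat call would fail.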
Example: $ firejail --keep-fd=all --keep-fd=file_descriptor Don't close specified open file descriptors. By default only file descriptors 0, 1 and 2 are inherited to the sandbox, and all other file descriptors are closed. Example: $ firejail --keep-fd=3,4,5 --keep-shell-rc By default, when using a private home directory, firejail copies files from the system's user home template (/etc/skel) into it, which overrides attempts to whitelist the original files (such as ~/.bashrc and ~/.zshrc). This option disables this feature, and enables the user to whitelist the original files. --keep-var-tmp /var/tmp directory is untouched. Example: $ firejail --keep-var-tmp --list List all sandboxes, see MONITORING section for more details. Example: $ firejail --list 7015:netblue:browser:firejail firefox 7056:netblue:torrent:firejail --net=eth0 transmission-gtk 7064:netblue::firejail --noroot xterm --ls=name|pid dir_or_filename List files in sandbox container, see FILE TRANSFER section for more details. --mac=address Assign MAC addresses to the last network interface defined by a --net option. This option is not supported for wireless interfaces. Example: $ firejail --net=eth0 --mac=00:11:22:33:44:55 firefox --machine-id Spoof id number in /etc/machine-id file - a new random id is generated inside the sandbox. Note that this breaks audio support. Enable it when sound is not required. Example: $ firejail --machine-id --mkdir=dirname Create a directory in user home. Parent directories are created as needed. Example: $ firejail --mkdir=~/work/project --mkfile=filename Create an empty file in user home. Example: $ firejail --mkfile=~/work/project/readme --memory-deny-write-execute Install a seccomp filter to block attempts to create memory mappings that are both writable and executable, to change mappings to be executable, or to create executable shared memory. The filter examines the arguments of mmap, mmap2, mprotect, pkey_mprotect, memfd_create and shmat system calls and returns error EPERM to the process (or kills it or log the attempt, see --seccomp-error-action below) if necessary. Note: shmat is not implemented as a system call on some platforms including i386, and it cannot be handled by seccomp-bpf. --mtu=number Assign a MTU value to the last network interface defined by a --net option. Example: $ firejail --net=eth0 --mtu=1492 --name=name Set sandbox name. Several options, such as --join and --shutdown, can use this name to identify a sandbox. The name cannot contain only digits, as that is treated as a PID in the other options, such as in --join. For valid names, see the NAME VALIDATION section. In case the name supplied by the user is already in use by another sandbox, Firejail will assign a new name as "name- PID", where PID is the process ID of the sandbox. This functionality can be disabled at run time in /etc/firejail/firejail.config file, by setting "name- change" flag to "no". Example: $ firejail --name=browser firefox & $ firejail --name=browser --private firefox --no-remote & $ firejail --list 1198:netblue:browser:firejail --name=browser firefox 1312:netblue:browser-1312:firejail --name=browser --private firefox --no-remote --net=bridge_interface Enable a new network namespace and connect it to this bridge interface. Unless specified with option --ip and --defaultgw, an IP address and a default gateway will be assigned automatically to the sandbox. The IP address is verified using ARP before assignment. The address configured as default gateway is the bridge device IP address. 
Up to four --net options can be specified. Example: $ sudo brctl addbr br0 $ sudo ifconfig br0 10.10.20.1/24 $ sudo brctl addbr br1 $ sudo ifconfig br1 10.10.30.1/24 $ firejail --net=br0 --net=br1 --net=ethernet_interface|wireless_interface Enable a new network namespace and connect it to this ethernet interface using the standard Linux macvlan|ipvlan driver. Unless specified with option --ip and --defaultgw, an IP address and a default gateway will be assigned automatically to the sandbox. The IP address is verified using ARP before assignment. The address configured as default gateway is the default gateway of the host. Up to four --net options can be specified. Support for ipvlan driver was introduced in Linux kernel 3.19. Example: $ firejail --net=eth0 --ip=192.168.1.80 --dns=8.8.8.8 firefox $ firejail --net=wlan0 firefox --net=none Enable a new, unconnected network namespace. The only interface available in the new namespace is a new loopback interface (lo). Use this option to deny network access to programs that don't really need network access. Example: $ firejail --net=none vlc Note: --net=none can crash the application on some platforms. In these cases, it can be replaced with --protocol=unix. --net=tap_interface Enable a new network namespace and connect it to this ethernet tap interface using the standard Linux macvlan driver. If the tap interface is not configured, the sandbox will not try to configure the interface inside the sandbox. Please use --ip, --netmask and --defaultgw to specify the configuration. Example: $ firejail --net=tap0 --ip=10.10.20.80 --netmask=255.255.255.0 --defaultgw=10.10.20.1 firefox --net.print=name|pid If a new network namespace is enabled, print network interface configuration for the sandbox specified by name or PID. Example: $ firejail --net.print=browser Switching to pid 1853, the first child process inside the sandbox Interface MAC IP Mask Status lo 127.0.0.1 255.0.0.0 UP eth0-1852 5e:fb:8e:27:29:26 192.168.1.186 255.255.255.0 UP --netfilter Enable a default firewall if a new network namespace is created inside the sandbox. This option has no effect for sandboxes using the system network namespace. The default firewall is optimized for regular desktop applications. No incoming connections are accepted: *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -i lo -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT # allow ping -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT -A INPUT -p icmp --icmp-type echo-request -j ACCEPT # drop STUN (WebRTC) requests -A OUTPUT -p udp --dport 3478 -j DROP -A OUTPUT -p udp --dport 3479 -j DROP -A OUTPUT -p tcp --dport 3478 -j DROP -A OUTPUT -p tcp --dport 3479 -j DROP COMMIT Example: $ firejail --net=eth0 --netfilter firefox --netfilter=filename Enable the firewall specified by filename if a new network namespace is created inside the sandbox. This option has no effect for sandboxes using the system network namespace. Please use the regular iptables-save/iptables-restore format for the filter file. The following examples are available in /etc/firejail directory: webserver.net is a webserver firewall that allows access only to TCP ports 80 and 443. Example: $ firejail --netfilter=/etc/firejail/webserver.net --net=eth0 \ /etc/init.d/apache2 start nolocal.net/nolocal6.net is a desktop client firewall that disable access to local network. 
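Beyond the filters shipped in /etc/firejail, a custom filter can be supplied; as a rough sketch in the same iptables-save format (the file name ~/dns-https-only.net and the port selection are only an illustration), the following allows outgoing DNS and HTTPS and nothing else:
 *filter
 :INPUT DROP [0:0]
 :FORWARD DROP [0:0]
 :OUTPUT DROP [0:0]
 -A INPUT -i lo -j ACCEPT
 -A OUTPUT -o lo -j ACCEPT
 -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
 -A OUTPUT -p udp --dport 53 -j ACCEPT
 -A OUTPUT -p tcp --dport 443 -j ACCEPT
 COMMIT
 $ firejail --netfilter=~/dns-https-only.net --net=eth0 firefox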
Example: $ firejail --netfilter=/etc/firejail/nolocal.net \ --net=eth0 firefox --netfilter=filename,arg1,arg2,arg3 ... This is the template version of the previous command. $ARG1, $ARG2, $ARG3 ... in the firewall script are replaced with arg1, arg2, arg3 ... passed on the command line. Up to 16 arguments are supported. Example: $ firejail --net=eth0 --ip=192.168.1.105 \ --netfilter=/etc/firejail/tcpserver.net,5001 server-program --netfilter.print=name|pid Print the firewall installed in the sandbox specified by name or PID. Example: $ firejail --name=browser --net=eth0 --netfilter firefox & $ firejail --netfilter.print=browser --netfilter6=filename Enable the IPv6 firewall specified by filename if a new network namespace is created inside the sandbox. This option has no effect for sandboxes using the system network namespace. Please use the regular iptables-save/iptables-restore format for the filter file. --netfilter6.print=name|pid Print the IPv6 firewall installed in the sandbox specified by name or PID. Example: $ firejail --name=browser --net=eth0 --netfilter firefox & $ firejail --netfilter6.print=browser --netlock Several types of programs (email clients, multiplayer games etc.) talk to a very small number of IP addresses. But the best example is the Tor browser. It only talks to a guard node, and there are two or three more on standby in case the main one fails. During startup, the browser contacts all of them, after that it keeps talking to the main one... for weeks! Use the network locking feature to build and deploy a custom network firewall in your sandbox. The firewall allows only the traffic to the IP addresses detected during the program startup. Traffic to any other address is quietly dropped. By default the network monitoring time is one minute. A network namespace (--net=eth0) is required for this feature to work. Example: $ firejail --net=eth0 --netlock \ --private=~/tor-browser_en-US ./start-tor-browser.desktop --netmask=address Use this option when you want to assign an IP address in a new namespace and the parent interface specified by --net is not configured. An IP address and a default gateway address also have to be added. By default the new namespace interface comes without an IP address and default gateway configured. Example: $ sudo /sbin/brctl addbr br0 $ sudo /sbin/ifconfig br0 up $ firejail --ip=10.10.20.67 --netmask=255.255.255.0 --defaultgw=10.10.20.1 --netns=name Run the program in a named, persistent network namespace. These can be created and configured using "ip netns". --netstats Monitor network namespace statistics, see MONITORING section for more details. Example: $ firejail --netstats PID User RX(KB/s) TX(KB/s) Command 1294 netblue 53.355 1.473 firejail --net=eth0 firefox 7383 netblue 9.045 0.112 firejail --net=eth0 transmission --nettrace[=name|pid] Monitor received TCP, UDP, and ICMP traffic. The sandbox can be specified by name or pid. Only networked sandboxes created with --net are supported. This option is only available when running the sandbox as root. Without a name/pid, Firejail will monitor the main system network namespace. Example: $ sudo firejail --nettrace 95 KB/s geoip 457, IP database 4436 52 KB/s *********** 64.222.84.207:443 United States 33 KB/s ******* 89.147.74.105:63930 Hungary 0 B/s 45.90.28.0:443 NextDNS 0 B/s 94.70.122.176:52309(UDP) Greece 339 B/s 104.26.7.35:443 Cloudflare If /usr/bin/geoiplookup is installed (geoip-bin package in Debian), the country the traffic originates from is added to the trace.
We also use the static IP map in /usr/lib/firejail/static-ip-map to print the domain names for some of the more common websites and cloud platforms. No external services are contacted for reverse IP lookup. --nice=value Set nice value for all processes running inside the sandbox. Only root may specify a negative value. Example: $ firejail --nice=2 firefox --no3d Disable 3D hardware acceleration. Example: $ firejail --no3d firefox --noautopulse (deprecated) See --keep-config-pulse. --noblacklist=dirname_or_filename Disable blacklist for this directory or file. Example: $ firejail $ nc dict.org 2628 bash: /bin/nc: Permission denied $ exit $ firejail --noblacklist=/bin/nc $ nc dict.org 2628 220 pan.alephnull.com dictd 1.12.1/rf on Linux 3.14-1-amd64 --nodbus (deprecated) Disable D-Bus access (both system and session buses). Equivalent to --dbus-system=none --dbus-user=none. Example: $ firejail --nodbus --net=none --nodvd Disable DVD and audio CD devices. Example: $ firejail --nodvd --noinput Disable input devices. Example: $ firejail --noinput --noexec=dirname_or_filename Remount directory or file noexec, nodev and nosuid. File globbing is supported, see FILE GLOBBING section for more details. Example: $ firejail --noexec=/tmp /etc and /var are noexec by default if the sandbox was started as a regular user. --nogroups Disable supplementary groups. Without this option, supplementary groups are enabled for the user starting the sandbox. For root user supplementary groups are always disabled. Note: By default all regular user groups are removed with the exception of the current user. This can be changed using --allusers command option. Example: $ id uid=1000(netblue) gid=1000(netblue) groups=1000(netblue),24(cdrom),25(floppy),27(sudo),29(audio) $ firejail --nogroups Parent pid 8704, child pid 8705 Child process initialized $ id uid=1000(netblue) gid=1000(netblue) groups=1000(netblue) $ --nonewprivs Sets the NO_NEW_PRIVS prctl. This ensures that child processes cannot acquire new privileges using execve(2); in particular, this means that calling a suid binary (or one with file capabilities) does not result in an increase of privilege. This option is enabled by default if seccomp filter is activated. --noprinters Disable printers. --noprofile Do not use a security profile. Example: $ firejail Reading profile /etc/firejail/default.profile Parent pid 8553, child pid 8554 Child process initialized [...] $ firejail --noprofile Parent pid 8553, child pid 8554 Child process initialized [...] --noroot Install a user namespace with a single user - the current user. root user does not exist in the new namespace. This option requires a Linux kernel version 3.8 or newer. The option is not supported for --chroot and --overlay configurations, or for sandboxes started as root. Example: $ firejail --noroot Parent pid 8553, child pid 8554 Child process initialized $ ping google.com ping: icmp open socket: Operation not permitted $ --nosound Disable sound system. Example: $ firejail --nosound firefox --notv Disable DVB (Digital Video Broadcasting) TV devices. Example: $ firejail --notv vlc --nou2f Disable U2F devices. Example: $ firejail --nou2f --novideo Disable video devices. --nowhitelist=dirname_or_filename Disable whitelist for this directory or file. --oom=value Configure kernel's OutOfMemory-killer score for this sandbox. The acceptable score values are between 0 and 1000 for regular users, and -1000 to 1000 for root. For more information on OOM kernel feature see man choom. 
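As a rough sketch, and assuming the score is applied as the sandboxed process's oom_score_adj value (as the choom reference suggests), the setting can be inspected from inside the sandbox:
 $ firejail --noprofile --oom=500 cat /proc/self/oom_score_adj
The command is expected to print the configured adjustment for the sandboxed process.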
Example: $ firejail --oom=300 firefox --output=logfile stdout logging and log rotation. Copy stdout to logfile, and keep the size of the file under 500KB using log rotation. Five files with prefixes .1 to .5 are used in rotation. Example: $ firejail --output=sandboxlog /bin/bash [...] $ ls -l sandboxlog* -rw-r--r-- 1 netblue netblue 333890 Jun 2 07:48 sandboxlog -rw-r--r-- 1 netblue netblue 511488 Jun 2 07:48 sandboxlog.1 -rw-r--r-- 1 netblue netblue 511488 Jun 2 07:48 sandboxlog.2 -rw-r--r-- 1 netblue netblue 511488 Jun 2 07:48 sandboxlog.3 -rw-r--r-- 1 netblue netblue 511488 Jun 2 07:48 sandboxlog.4 -rw-r--r-- 1 netblue netblue 511488 Jun 2 07:48 sandboxlog.5 --output-stderr=logfile Similar to --output, but stderr is also stored. --private Mount new /root and /home/user directories in temporary filesystems. All modifications are discarded when the sandbox is closed. Example: $ firejail --private firefox --private=directory Use directory as user home. --private and --private=directory cannot be used together. Example: $ firejail --private=/home/netblue/firefox-home firefox Bug: Even with this enabled, some commands (such as mkdir, mkfile and private-cache) will still operate on the original home directory. Workaround: Disable the incompatible commands, such as by using "ignore mkdir" and "ignore mkfile". For details, see #903 https://github.com/netblue30/firejail/issues/903 --private-bin=file,file Build a new /bin in a temporary filesystem, and copy the programs in the list. The files in the list must be expressed as relative to the /bin, /sbin, /usr/bin, /usr/sbin, or /usr/local/bin directories. If no listed files are found, /bin directory will be empty. The same directory is also bind-mounted over /sbin, /usr/bin, /usr/sbin and /usr/local/bin. All modifications are discarded when the sandbox is closed. Multiple private- bin commands are allowed and they accumulate. File globbing is supported, see FILE GLOBBING section for more details. Example: $ firejail --private-bin=bash,sed,ls,cat Parent pid 20841, child pid 20842 Child process initialized $ ls /bin bash cat ls sed --private-cache Mount an empty temporary filesystem on top of the .cache directory in user home. All modifications are discarded when the sandbox is closed. Example: $ firejail --private-cache openbox --private-cwd Set working directory inside jail to the home directory, and failing that, the root directory. Does not impact working directory of profile include paths. Example: $ pwd /tmp $ firejail --private-cwd $ pwd /home/user --private-cwd=directory Set working directory inside the jail. Full directory path is required. Symbolic links are not allowed. Does not impact working directory of profile include paths. Example: $ pwd /tmp $ firejail --private-cwd=/opt $ pwd /opt --private-dev Create a new /dev directory. Only disc, dri, dvb, hidraw, null, full, zero, tty, pts, ptmx, random, snd, urandom, video, log, shm and usb devices are available. Use the options --no3d, --nodvd, --nosound, --notv, --nou2f and --novideo for additional restrictions. Example: $ firejail --private-dev Parent pid 9887, child pid 9888 Child process initialized $ ls /dev cdrom cdrw dri dvd dvdrw full log null ptmx pts random shm snd sr0 tty urandom zero $ --private-etc, --private-etc=file,directory,@group The files installed by --private-etc are copies of the original system files from /etc directory. 
By default, the command brings in a skeleton of files and directories used by most console tools: $ firejail --private-etc dig debian.org For X11/GTK/QT/Gnome/KDE programs add @x11 group as a parameter. Example: $ firejail --private-etc=@x11,gcrypt,python* gimp gcrypt and /etc/python* directories are not part of the generic @x11 group. File globbing is supported. For games, add @games group: $ firejail --private-etc=@games,@x11 warzone2100 Sound and networking files are included automatically, unless --nosound or --net=none are specified. Files for encrypted TLS/SSL protocol are in @tls-ca group. $ firejail --private-etc=@tls-ca,wgetrc wget https://debian.org Note: The easiest way to extract the list of /etc files accessed by your program is to use the strace utility: $ strace /usr/bin/transmission-qt 2>&1 | grep open | grep etc --private-home=file,directory Build a new user home in a temporary filesystem, and copy the files and directories in the list into the new home. The files and directories in the list must be expressed as relative to the current user's home directory. All modifications are discarded when the sandbox is closed. Example: $ firejail --private-home=.mozilla firefox --private-opt=file,directory Build a new /opt in a temporary filesystem, and copy the files and directories in the list. The files and directories in the list must be expressed as relative to the /opt directory, and must not contain the / character (e.g., /opt/foo must be expressed as foo, but /opt/foo/bar -- expressed as foo/bar -- is disallowed). If no listed file is found, /opt directory will be empty. All modifications are discarded when the sandbox is closed. Example: $ firejail --private-opt=firefox /opt/firefox/firefox --private-srv=file,directory Build a new /srv in a temporary filesystem, and copy the files and directories in the list. The files and directories in the list must be expressed as relative to the /srv directory, and must not contain the / character (e.g., /srv/foo must be expressed as foo, but /srv/foo/bar -- expressed as foo/bar -- is disallowed). If no listed file is found, /srv directory will be empty. All modifications are discarded when the sandbox is closed. Example: # firejail --private-srv=www /etc/init.d/apache2 start --private-tmp Mount an empty temporary filesystem on top of /tmp directory whitelisting X11 and PulseAudio sockets. Example: $ firejail --private-tmp $ ls -al /tmp drwxrwxrwt 4 nobody nogroup 80 Apr 30 11:46 . drwxr-xr-x 30 nobody nogroup 4096 Apr 26 22:18 .. drwx------ 2 nobody nogroup 4096 Apr 30 10:52 pulse-PKdhtXMmr18n drwxrwxrwt 2 nobody nogroup 4096 Apr 30 10:52 .X11-unix --profile=filename_or_profilename Load a custom security profile from filename. For filename use an absolute path or a path relative to the current path. For more information, see SECURITY PROFILES section below. Example: $ firejail --profile=myprofile --profile.print=name|pid Print the name of the profile file for the sandbox identified by name or PID. Example: $ firejail --profile.print=browser /etc/firejail/firefox.profile --protocol=protocol,protocol,protocol Enable protocol filter. The filter is based on seccomp and checks the first argument to socket system call. Recognized values: unix, inet, inet6, netlink, packet, and bluetooth. This option is not supported for i386 architecture. Multiple protocol commands are allowed and they accumulate. Example: $ firejail --protocol=unix,inet,inet6 firefox --protocol.print=name|pid Print the protocol filter for the sandbox identified by name or PID.
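As a quick sketch of the protocol filter in action, dropping inet/inet6 should make TCP and UDP socket creation fail inside the sandbox:
 $ firejail --noprofile --protocol=unix wget -q https://debian.org
The wget call is expected to fail with a socket error, since only unix sockets are permitted; the --protocol.print examples below then show how to inspect the filter of a running sandbox.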
Example: $ firejail --name=mybrowser firefox & $ firejail --protocol.print=mybrowser unix,inet,inet6,netlink Example: $ firejail --list 3272:netblue::firejail --private firefox $ firejail --protocol.print=3272 unix,inet,inet6,netlink --put=name|pid src-filename dest-filename Put a file in sandbox container, see FILE TRANSFER section for more details. --quiet Turn off Firejail's output. The same effect can be obtained by setting an environment variable FIREJAIL_QUIET to yes. --read-only=dirname_or_filename Set directory or file read-only. File globbing is supported, see FILE GLOBBING section for more details. Example: $ firejail --read-only=~/.mozilla firefox --read-write=dirname_or_filename Set directory or file read-write. Only files or directories belonging to the current user are allowed for this operation. File globbing is supported, see FILE GLOBBING section for more details. Example: $ mkdir ~/test $ touch ~/test/a $ firejail --read-only=~/test --read-write=~/test/a --restrict-namespaces Install a seccomp filter that blocks attempts to create new cgroup, ipc, net, mount, pid, time, user or uts namespaces. Example: $ firejail --restrict-namespaces --restrict-namespaces=cgroup,ipc,net,mnt,pid,time,user,uts Install a seccomp filter that blocks attempts to create any of the specified namespaces. The filter examines the arguments of clone, unshare and setns system calls and returns error EPERM to the process (or kills it or logs the attempt, see --seccomp-error-action below) if necessary. Note that the filter is not able to examine the arguments of clone3 system calls, and always responds to these calls with error ENOSYS. Example: $ firejail --restrict-namespaces=user,net --rlimit-as=number Set the maximum size of the process's virtual memory (address space) in bytes. Use k(ilobyte), m(egabyte) or g(igabyte) for size suffix (base 1024). --rlimit-cpu=number Set the maximum limit, in seconds, for the amount of CPU time each sandboxed process can consume. When the limit is reached, the processes are killed. The CPU limit is a limit on CPU seconds rather than elapsed time. CPU seconds is basically how many seconds the CPU has been in use and does not necessarily directly relate to the elapsed time. Linux kernel keeps track of CPU seconds for each process independently. --rlimit-fsize=number Set the maximum file size that can be created by a process. Use k(ilobyte), m(egabyte) or g(igabyte) for size suffix (base 1024). --rlimit-nofile=number Set the maximum number of files that can be opened by a process. --rlimit-nproc=number Set the maximum number of processes that can be created for the real user ID of the calling process. --rlimit-sigpending=number Set the maximum number of pending signals for a process. --rmenv=name Remove environment variable in the new sandbox. Example: $ firejail --rmenv=DBUS_SESSION_BUS_ADDRESS --scan ARP-scan all the networks from inside a network namespace. This makes it possible to detect macvlan kernel device drivers running on the current host. Example: $ firejail --net=eth0 --scan --seccomp Enable seccomp filter and blacklist the syscalls in the default list, which is @default-nodebuggers unless --allow-debuggers is specified, then it is @default. 
To help creating useful seccomp filters more easily, the following system call groups are defined: @aio, @basic-io, @chown, @clock, @cpu-emulation, @debug, @default, @default-nodebuggers, @default-keep, @file-system, @io- event, @ipc, @keyring, @memlock, @module, @mount, @network-io, @obsolete, @privileged, @process, @raw-io, @reboot, @resources, @setuid, @swap, @sync, @system- service and @timer. More information about groups can be found in /usr/share/doc/firejail/syscalls.txt The default list can be customized, see --seccomp= for a description. It can be customized also globally in /etc/firejail/firejail.config file. System architecture is strictly imposed only if flag --seccomp.block-secondary is used. The filter is applied at run time only if the correct architecture was detected. For the case of I386 and AMD64 both 32-bit and 64-bit filters are installed. Firejail will print seccomp violations to the audit log if the kernel was compiled with audit support (CONFIG_AUDIT flag). Example: $ firejail --seccomp --seccomp=syscall,@group,!syscall2 Enable seccomp filter, blacklist the default list and the syscalls or syscall groups specified by the command, but don't blacklist "syscall2". On a 64 bit architecture, an additional filter for 32 bit system calls can be installed with --seccomp.32. Example: $ firejail --seccomp=utime,utimensat,utimes firefox $ firejail --seccomp=@clock,mkdir,unlinkat transmission- gtk $ firejail '--seccomp=@ipc,!pipe,!pipe2' audacious Syscalls can be specified by their number if prefix $ is added, so for example $165 would be equal to mount on i386. Instead of dropping the syscall by returning EPERM, another error number can be returned using syscall:errno syntax. This can be also changed globally with --seccomp- error-action or in /etc/firejail/firejail.config file. The process can also be killed by using syscall:kill syntax, or the attempt may be logged with syscall:log. Example: $ firejail --seccomp=unlinkat:ENOENT,utimensat,utimes Parent pid 10662, child pid 10663 Child process initialized $ touch testfile $ ls testfile testfile $ rm testfile rm: cannot remove `testfile': No such file or directory If the blocked system calls would also block Firejail from operating, they are handled by adding a preloaded library which performs seccomp system calls later. However, this is incompatible with 32 bit seccomp filters. Example: $ firejail --noprofile --seccomp=execve sh Parent pid 32751, child pid 32752 Post-exec seccomp protector enabled list in: execve, check list: @default-keep prelist: (null), postlist: execve Child process initialized in 46.44 ms $ ls Operation not permitted --seccomp.block-secondary Enable seccomp filter and filter system call architectures so that only the native architecture is allowed. For example, on amd64, i386 and x32 system calls are blocked as well as changing the execution domain with personality(2) system call. --seccomp.drop=syscall,@group Enable seccomp filter, and blacklist the syscalls or the syscall groups specified by the command. On a 64 bit architecture, an additional filter for 32 bit system calls can be installed with --seccomp.32.drop. Example: $ firejail --seccomp.drop=utime,utimensat,utimes,@clock Instead of dropping the syscall by returning EPERM, another error number can be returned using syscall:errno syntax. This can be also changed globally with --seccomp- error-action or in /etc/firejail/firejail.config file. The process can also be killed by using syscall:kill syntax, or the attempt may be logged with syscall:log. 
Example: $ firejail --seccomp.drop=unlinkat:ENOENT,utimensat,utimes Parent pid 10662, child pid 10663 Child process initialized $ touch testfile $ ls testfile testfile $ rm testfile rm: cannot remove `testfile': No such file or directory --seccomp.keep=syscall,@group,!syscall2 Enable seccomp filter, blacklist all syscall not listed and "syscall2". The system calls needed by Firejail (group @default-keep: prctl, execve, execveat) are handled with the preload library. On a 64 bit architecture, an additional filter for 32 bit system calls can be installed with --seccomp.32.keep. Example: $ firejail --seccomp.keep=poll,select,[...] transmission- gtk --seccomp.print=name|pid Print the seccomp filter for the sandbox identified by name or PID. Example: $ firejail --name=browser firefox & $ firejail --seccomp.print=browser line OP JT JF K ================================= 0000: 20 00 00 00000004 ld data.architecture 0001: 15 01 00 c000003e jeq ARCH_64 0003 (false 0002) 0002: 06 00 00 7fff0000 ret ALLOW 0003: 20 00 00 00000000 ld data.syscall-number 0004: 35 01 00 40000000 jge X32_ABI true:0006 (false 0005) 0005: 35 01 00 00000000 jge read 0007 (false 0006) 0006: 06 00 00 00050001 ret ERRNO(1) 0007: 15 41 00 0000009a jeq modify_ldt 0049 (false 0008) 0008: 15 40 00 000000d4 jeq lookup_dcookie 0049 (false 0009) 0009: 15 3f 00 0000012a jeq perf_event_open 0049 (false 000a) 000a: 15 3e 00 00000137 jeq process_vm_writev 0049 (false 000b) 000b: 15 3d 00 0000009c jeq _sysctl 0049 (false 000c) 000c: 15 3c 00 000000b7 jeq afs_syscall 0049 (false 000d) 000d: 15 3b 00 000000ae jeq create_module 0049 (false 000e) 000e: 15 3a 00 000000b1 jeq get_kernel_syms 0049 (false 000f) 000f: 15 39 00 000000b5 jeq getpmsg 0049 (false 0010) 0010: 15 38 00 000000b6 jeq putpmsg 0049 (false 0011) 0011: 15 37 00 000000b2 jeq query_module 0049 (false 0012) 0012: 15 36 00 000000b9 jeq security 0049 (false 0013) 0013: 15 35 00 0000008b jeq sysfs 0049 (false 0014) 0014: 15 34 00 000000b8 jeq tuxcall 0049 (false 0015) 0015: 15 33 00 00000086 jeq uselib 0049 (false 0016) 0016: 15 32 00 00000088 jeq ustat 0049 (false 0017) 0017: 15 31 00 000000ec jeq vserver 0049 (false 0018) 0018: 15 30 00 0000009f jeq adjtimex 0049 (false 0019) 0019: 15 2f 00 00000131 jeq clock_adjtime 0049 (false 001a) 001a: 15 2e 00 000000e3 jeq clock_settime 0049 (false 001b) 001b: 15 2d 00 000000a4 jeq settimeofday 0049 (false 001c) 001c: 15 2c 00 000000b0 jeq delete_module 0049 (false 001d) 001d: 15 2b 00 00000139 jeq finit_module 0049 (false 001e) 001e: 15 2a 00 000000af jeq init_module 0049 (false 001f) 001f: 15 29 00 000000ad jeq ioperm 0049 (false 0020) 0020: 15 28 00 000000ac jeq iopl 0049 (false 0021) 0021: 15 27 00 000000f6 jeq kexec_load 0049 (false 0022) 0022: 15 26 00 00000140 jeq kexec_file_load 0049 (false 0023) 0023: 15 25 00 000000a9 jeq reboot 0049 (false 0024) 0024: 15 24 00 000000a7 jeq swapon 0049 (false 0025) 0025: 15 23 00 000000a8 jeq swapoff 0049 (false 0026) 0026: 15 22 00 000000a3 jeq acct 0049 (false 0027) 0027: 15 21 00 00000141 jeq bpf 0049 (false 0028) 0028: 15 20 00 000000a1 jeq chroot 0049 (false 0029) 0029: 15 1f 00 000000a5 jeq mount 0049 (false 002a) 002a: 15 1e 00 000000b4 jeq nfsservctl 0049 (false 002b) 002b: 15 1d 00 0000009b jeq pivot_root 0049 (false 002c) 002c: 15 1c 00 000000ab jeq setdomainname 0049 (false 002d) 002d: 15 1b 00 000000aa jeq sethostname 0049 (false 002e) 002e: 15 1a 00 000000a6 jeq umount2 0049 (false 002f) 002f: 15 19 00 00000099 jeq vhangup 0049 (false 0030) 0030: 15 18 00 000000ee jeq set_mempolicy 
0049 (false 0031) 0031: 15 17 00 00000100 jeq migrate_pages 0049 (false 0032) 0032: 15 16 00 00000117 jeq move_pages 0049 (false 0033) 0033: 15 15 00 000000ed jeq mbind 0049 (false 0034) 0034: 15 14 00 00000130 jeq open_by_handle_at 0049 (false 0035) 0035: 15 13 00 0000012f jeq name_to_handle_at 0049 (false 0036) 0036: 15 12 00 000000fb jeq ioprio_set 0049 (false 0037) 0037: 15 11 00 00000067 jeq syslog 0049 (false 0038) 0038: 15 10 00 0000012c jeq fanotify_init 0049 (false 0039) 0039: 15 0f 00 00000138 jeq kcmp 0049 (false 003a) 003a: 15 0e 00 000000f8 jeq add_key 0049 (false 003b) 003b: 15 0d 00 000000f9 jeq request_key 0049 (false 003c) 003c: 15 0c 00 000000fa jeq keyctl 0049 (false 003d) 003d: 15 0b 00 000000ce jeq io_setup 0049 (false 003e) 003e: 15 0a 00 000000cf jeq io_destroy 0049 (false 003f) 003f: 15 09 00 000000d0 jeq io_getevents 0049 (false 0040) 0040: 15 08 00 000000d1 jeq io_submit 0049 (false 0041) 0041: 15 07 00 000000d2 jeq io_cancel 0049 (false 0042) 0042: 15 06 00 000000d8 jeq remap_file_pages 0049 (false 0043) 0043: 15 05 00 00000116 jeq vmsplice 0049 (false 0044) 0044: 15 04 00 00000087 jeq personality 0049 (false 0045) 0045: 15 03 00 00000143 jeq userfaultfd 0049 (false 0046) 0046: 15 02 00 00000065 jeq ptrace 0049 (false 0047) 0047: 15 01 00 00000136 jeq process_vm_readv 0049 (false 0048) 0048: 06 00 00 7fff0000 ret ALLOW 0049: 06 00 01 00000000 ret KILL $ --seccomp-error-action= kill | ERRNO | log By default, if a seccomp filter blocks a system call, the process gets EPERM as the error. With --seccomp-error- action=error, another error number can be returned, for example ENOSYS or EACCES. The process can also be killed (like in versions <0.9.63 of Firejail) by using --seccomp- error-action=kill syntax, or the attempt may be logged with --seccomp-error-action=log. Not killing the process weakens Firejail slightly when trying to contain intrusion, but it may also allow tighter filters if the only alternative is to allow a system call. --shutdown=name|pid Shutdown the sandbox identified by name or PID. Example: $ firejail --name=mygame --caps.drop=all warzone2100 & $ firejail --shutdown=mygame Example: $ firejail --list 3272:netblue::firejail --private firefox $ firejail --shutdown=3272 --snitrace[=name|pid] Monitor Server Name Indication (TLS/SNI). The sandbox can be specified by name or pid. Only networked sandboxes created with --net are supported. This option is only available when running the sandbox as root. Without a name/pid, Firejail will monitor the main system network namespace. Example: $ sudo firejail --snitrace 07:49:51 23.185.0.3 linux.com 07:49:51 23.185.0.3 www.linux.com 07:50:05 192.0.73.2 secure.gravatar.com 07:52:35 172.67.68.93 www.howtoforge.com 07:52:37 13.225.103.59 sf.ezoiccdn.com 07:52:42 142.250.176.3 www.gstatic.com 07:53:03 173.236.250.32 www.linuxlinks.com 07:53:05 192.0.77.37 c0.wp.com 07:53:08 192.0.78.32 jetpack.wordpress.com 07:53:09 192.0.77.32 s0.wp.com 07:53:09 192.0.77.2 i0.wp.com 07:53:10 192.0.77.2 i0.wp.com 07:53:11 192.0.73.2 1.gravatar.com --tab Enable shell tab completion in sandboxes using private or whitelisted home directories. $ firejail --private --tab --timeout=hh:mm:ss Kill the sandbox automatically after the time has elapsed. The time is specified in hours/minutes/seconds format. $ firejail --timeout=01:30:00 firefox --tmpfs=dirname Mount a writable tmpfs filesystem on directory dirname. Directories outside user home or not owned by the user are not allowed. 
Sandboxes running as root are exempt from these restrictions. File globbing is supported, see FILE GLOBBING section for more details. Example: $ firejail --tmpfs=~/.local/share --top Monitor the most CPU-intensive sandboxes, see MONITORING section for more details. Example: $ firejail --top --trace[=filename] Trace open, access and connect system calls. If filename is specified, log trace output to filename, otherwise log to console. Example: $ firejail --trace wget -q www.debian.org Reading profile /etc/firejail/wget.profile 3:wget:fopen64 /etc/wgetrc:0x5c8e8ce6c0 3:wget:fopen /etc/hosts:0x5c8e8cfb70 3:wget:socket AF_INET SOCK_DGRAM IPPROTO_IP:3 3:wget:connect 3 8.8.8.8 port 53:0 3:wget:socket AF_INET SOCK_STREAM IPPROTO_IP:3 3:wget:connect 3 130.89.148.14 port 80:0 3:wget:fopen64 index.html:0x5c8e8d1a60 parent is shutting down, bye... --tracelog This option enables auditing blacklisted files and directories. A message is sent to syslog in case the file or the directory is accessed. Example: $ firejail --tracelog firefox Sample messages: $ sudo tail -f /var/log/syslog [...] Dec 3 11:43:25 debian firejail[70]: blacklist violation - sandbox 26370, exe firefox, syscall open64, path /etc/shadow Dec 3 11:46:17 debian firejail[70]: blacklist violation - sandbox 26370, exe firefox, syscall opendir, path /boot [...] Note: Support for this command is controlled in firejail.config with the tracelog option. --tree Print a tree of all sandboxed processes, see MONITORING section for more details. Example: $ firejail --tree 11903:netblue:firejail iceweasel 11904:netblue:iceweasel 11957:netblue:/usr/lib/iceweasel/plugin-container 11969:netblue:firejail --net=eth0 transmission-gtk 11970:netblue:transmission-gtk --version Print program version/compile time support and exit. Example: $ firejail --version firejail version 0.9.27 Compile time support: - AppArmor support is enabled - AppImage support is enabled - chroot support is enabled - file and directory whitelisting support is enabled - file transfer support is enabled - firetunnel support is enabled - networking support is enabled - overlayfs support is enabled - private-home support is enabled - seccomp-bpf support is enabled - user namespace support is enabled - X11 sandboxing support is enabled --veth-name=name Use this name for the interface connected to the bridge for --net=bridge_interface commands, instead of the default one. Example: $ firejail --net=br0 --veth-name=if0 --whitelist=dirname_or_filename Whitelist directory or file. A temporary file system is mounted on the top directory, and the whitelisted files are mount-binded inside. Modifications to whitelisted files are persistent, everything else is discarded when the sandbox is closed. The top directory can be all directories in / (except /proc and /sys), /sys/module, /run/user/$UID, $HOME and all directories in /usr. Symbolic link handling: Whitelisting a path that is a symbolic link will also whitelist the path that it points to. For example, if ~/foo is whitelisted and it points to ~/bar, then ~/bar will also be whitelisted. Restrictions: With the exception of the user home directory, both the link and the real file should be in the same top directory. For symbolic links in the user home directory, both the link and the real file should be owned by the user. File globbing is supported, see FILE GLOBBING section for more details. 
Example: $ firejail --noprofile --whitelist=~/.mozilla $ firejail --whitelist=/tmp/.X11-unix --whitelist=/dev/null $ firejail "--whitelist=/home/username/My Virtual Machines" $ firejail --whitelist=~/work* --whitelist=/var/backups* --writable-etc Mount /etc directory read-write. Example: $ sudo firejail --writable-etc --writable-run-user Disable the default blacklisting of /run/user/$UID/systemd and /run/user/$UID/gnupg. Example: $ sudo firejail --writable-run-user --writable-var Mount /var directory read-write. Example: $ sudo firejail --writable-var --writable-var-log Use the real /var/log directory, not a clone. By default, a tmpfs is mounted on top of /var/log directory, and a skeleton filesystem is created based on the original /var/log. Example: $ sudo firejail --writable-var-log --x11 Sandbox the application using Xpra, Xephyr, Xvfb or Xorg security extension. The sandbox will prevent screenshot and keylogger applications started inside the sandbox from accessing clients running outside the sandbox. Firejail will try Xpra first, and if Xpra is not installed on the system, it will try to find Xephyr. If all fails, Firejail will not attempt to use Xvfb or X11 security extension. Xpra, Xephyr and Xvfb modes require a network namespace to be instantiated in order to disable X11 abstract Unix socket. If this is not possible, the user can disable the abstract socket by adding "-nolisten local" on Xorg command line at system level. Example: $ firejail --x11 --net=eth0 firefox --x11=none Blacklist /tmp/.X11-unix directory, ${HOME}/.Xauthority and the file specified in ${XAUTHORITY} environment variable. Remove DISPLAY and XAUTHORITY environment variables. Stop with error message if X11 abstract socket will be accessible in jail. --x11=xephyr Start Xephyr and attach the sandbox to this server. Xephyr is a display server implementing the X11 display server protocol. A network namespace needs to be instantiated in order to deny access to X11 abstract Unix domain socket. Xephyr runs in a window just like any other X11 application. The default window size is 800x600. This can be modified in /etc/firejail/firejail.config file. The recommended way to use this feature is to run a window manager inside the sandbox. A security profile for OpenBox is provided. Xephyr is developed by Xorg project. On Debian platforms it is installed with the command sudo apt-get install xserver-xephyr. This feature is not available when running as root. Example: $ firejail --x11=xephyr --net=eth0 openbox --x11=xorg Sandbox the application using the untrusted mode implemented by X11 security extension. The extension is available in Xorg package and it is installed by default on most Linux distributions. It provides support for a simple trusted/untrusted connection model. Untrusted clients are restricted in certain ways to prevent them from reading window contents of other clients, stealing input events, etc. The untrusted mode has several limitations. A lot of regular programs assume they are a trusted X11 clients and will crash or lock up when run in untrusted mode. Chromium browser and xterm are two examples. Firefox and transmission-gtk seem to be working fine. A network namespace is not required for this option. Example: $ firejail --x11=xorg firefox --x11=xpra Start Xpra (https://xpra.org) and attach the sandbox to this server. Xpra is a persistent remote display server and client for forwarding X11 applications and desktop screens. 
A network namespace needs to be instantiated in order to deny access to X11 abstract Unix domain socket. On Debian platforms Xpra is installed with the command sudo apt-get install xpra. This feature is not available when running as root. Example: $ firejail --x11=xpra --net=eth0 firefox --x11=xvfb Start Xvfb X11 server and attach the sandbox to this server. Xvfb, short for X virtual framebuffer, performs all graphical operations in memory without showing any screen output. Xvfb is mainly used for remote access and software testing on headless servers. On Debian platforms Xvfb is installed with the command sudo apt-get install xvfb. This feature is not available when running as root. Example: remote VNC access On the server we start a sandbox using Xvfb and openbox window manager. The default size of Xvfb screen is 800x600 - it can be changed in /etc/firejail/firejail.config (xvfb-screen). Some sort of networking (--net) is required in order to isolate the abstract sockets used by other X servers. $ firejail --net=none --x11=xvfb openbox *** Attaching to Xvfb display 792 *** Reading profile /etc/firejail/openbox.profile Reading profile /etc/firejail/disable-common.inc Reading profile /etc/firejail/disable-common.local Parent pid 5400, child pid 5401 On the server we also start a VNC server and attach it to the display handled by our Xvfb server (792). $ x11vnc -display :792 On the client machine we start a VNC viewer and use it to connect to our server: $ vncviewer --xephyr-screen=WIDTHxHEIGHT Set screen size for --x11=xephyr. The setting will overwrite the default set in /etc/firejail/firejail.config for the current sandbox. Run xrandr to get a list of supported resolutions on your computer. Example: $ firejail --net=eth0 --x11=xephyr --xephyr-screen=640x480 firefox NAME top VALIDATION For simplicity, the same name validation is used for multiple options. Rules: The name must be 1-253 characters long. The name can only contain ASCII letters, digits and the special characters "-._" (that is, the name cannot contain spaces or control characters). The name cannot contain only digits. The first and last characters must be an ASCII letter or digit and the name may contain special characters in the middle. DESKTOP INTEGRATION top A symbolic link to /usr/bin/firejail under the name of a program, will start the program in Firejail sandbox. The symbolic link should be placed in the first $PATH position. On most systems, a good place is /usr/local/bin directory. Example: Make a firefox symlink to /usr/bin/firejail: $ sudo ln -s /usr/bin/firejail /usr/local/bin/firefox Verify $PATH $ which -a firefox /usr/local/bin/firefox /usr/bin/firefox Starting firefox in this moment, automatically invokes firejail firefox. This works for clicking on desktop environment icons, menus etc. Use "firejail --tree" to verify the program is sandboxed. $ firejail --tree 1189:netblue:firejail firefox 1190:netblue:firejail firefox 1220:netblue:/bin/sh -c "/usr/lib/firefox/firefox" 1221:netblue:/usr/lib/firefox/firefox We provide a tool that automates all this integration, please see firecfg(1) for more details. EXAMPLES top firejail Sandbox a regular shell session. firejail firefox Start Mozilla Firefox. firejail --debug firefox Debug Firefox sandbox. firejail --private firefox Start Firefox with a new, empty home directory. firejail --net=none vlc Start VLC in an unconnected network namespace. firejail --net=eth0 firefox Start Firefox in a new network namespace. An IP address is assigned automatically. 
firejail --net=br0 --ip=10.10.20.5 --net=br1 --net=br2 Start a shell session in a new network namespace and connect it to br0, br1, and br2 host bridge devices. IP addresses are assigned automatically for the interfaces connected to br1 and br2. firejail --list List all sandboxed processes. FILE GLOBBING top Globbing is the operation that expands a wildcard pattern into the list of pathnames matching the pattern. This pattern is matched at firejail start, and is NOT UPDATED at runtime. Files matching a blacklist, but created after firejail start, will be accessible within the jail. Matching is defined by: - '?' matches any character - '*' matches any string - '[' denotes a range of characters The globbing feature is implemented using the glibc glob function. For more information on the wildcard syntax see man 7 glob. The following command line options are supported: --blacklist, --private-bin, --noexec, --read-only, --read-write, --tmpfs, and --whitelist. Examples: $ firejail --private-bin=sh,bash,python* $ firejail --blacklist=~/dir[1234] $ firejail --read-only=~/dir[1-4] FILE TRANSFER top These features allow the user to inspect the filesystem container of an existing sandbox and transfer files between the container and the host filesystem. --cat=name|pid filename Write content of a container file to standard out. The container is specified by name or PID. If standard out is a terminal, all ASCII control characters except new line and horizontal tab are replaced. --get=name|pid filename Retrieve the container file and store it on the host in the current working directory. The container is specified by name or PID. --ls=name|pid dir_or_filename List container files. The container is specified by name or PID. --put=name|pid src-filename dest-filename Put src-filename in sandbox container. The container is specified by name or PID. Examples: $ firejail --name=mybrowser --private firefox $ firejail --ls=mybrowser ~/Downloads drwxr-xr-x netblue netblue 4096 . drwxr-xr-x netblue netblue 4096 .. -rw-r--r-- netblue netblue 7847 x11-x305.png -rw-r--r-- netblue netblue 6800 x11-x642.png -rw-r--r-- netblue netblue 34139 xpra-clipboard.png $ firejail --get=mybrowser ~/Downloads/xpra-clipboard.png $ firejail --put=mybrowser xpra-clipboard.png ~/Downloads/xpra-clipboard.png $ firejail --cat=mybrowser ~/.bashrc MONITORING top Option --list prints a list of all sandboxes. The format for each process entry is as follows: PID:USER:Sandbox Name:Command Option --tree prints the tree of processes running in the sandbox. The format for each process entry is as follows: PID:USER:Sandbox Name:Command Option --top is similar to the UNIX top command, however it applies only to sandboxes. Option --netstats prints network statistics for active sandboxes that install new network namespaces. Listed below are the available fields (columns) in alphabetical order for --top and --netstats options: Command Command used to start the sandbox. CPU% CPU usage, the sandbox share of the elapsed CPU time since the last screen update. PID Unique process ID for the task controlling the sandbox. Prcs Number of processes running in sandbox, including the controlling process. RES Resident Memory Size (KiB), sandbox non-swapped physical memory. It is a sum of the RES values for all processes running in the sandbox. RX(KB/s) Network receive speed. Sandbox Name The name of the sandbox, if any. SHR Shared Memory Size (KiB), it reflects memory shared with other processes.
It is a sum of the SHR values for all processes running in the sandbox, including the controlling process. TX(KB/s) Network transmit speed. Uptime Sandbox running time in hours:minutes:seconds format. USER The owner of the sandbox. RESTRICTED SHELL top To configure a restricted shell, replace /bin/bash with /usr/bin/firejail in /etc/passwd file for each user that needs to be restricted. Alternatively, you can specify /usr/bin/firejail in adduser command: adduser --shell /usr/bin/firejail username Additional arguments passed to firejail executable upon login are declared in /etc/firejail/login.users file. SECURITY PROFILES top Several command line options can be passed to the program using profile files. Firejail chooses the profile file as follows: 1. If a profile file is provided by the user with --profile=FILE option, the profile FILE is loaded. If a profile name is given, it is searched for first in the ~/.config/firejail directory and if not found then in /etc/firejail directory. Profile names do not include the .profile suffix. If there is a file with the same name as the given profile name, it will be used instead of doing the profile search. To force a profile search, prefix the profile name with a colon (:), eg. --profile=:PROFILE_NAME. Example: $ firejail --profile=/home/netblue/icecat.profile icecat Reading profile /home/netblue/icecat.profile [...] $ firejail --profile=icecat icecat-wrapper.sh Reading profile /etc/firejail/icecat.profile [...] 2. If a profile file with the same name as the application is present in ~/.config/firejail directory or in /etc/firejail, the profile is loaded. ~/.config/firejail takes precedence over /etc/firejail. Example: $ firejail icecat Command name #icecat# Found icecat profile in /home/netblue/.config/firejail directory Reading profile /home/netblue/.config/firejail/icecat.profile [...] 3. Use default.profile file if the sandbox is started by a regular user, or server.profile file if the sandbox is started by root. Firejail looks for these files in ~/.config/firejail directory, followed by /etc/firejail directory. To disable default profile loading, use --noprofile command option. Example: $ firejail Reading profile /etc/firejail/default.profile Parent pid 8553, child pid 8554 Child process initialized [...] $ firejail --noprofile Parent pid 8553, child pid 8554 Child process initialized [...] See man 5 firejail-profile for profile file syntax information. TRAFFIC SHAPING top Network bandwidth is an expensive resource shared among all sandboxes running on a system. Traffic shaping allows the user to increase network performance by controlling the amount of data that flows into and out of the sandboxes. Firejail implements a simple rate-limiting shaper based on Linux command tc. The shaper works at sandbox level, and can be used only for sandboxes configured with new network namespaces. 
Set rate-limits: $ firejail --bandwidth=name|pid set network download upload Clear rate-limits: $ firejail --bandwidth=name|pid clear network Status: $ firejail --bandwidth=name|pid status where: name - sandbox name pid - sandbox pid network - network interface as used by --net option download - download speed in KB/s (kilobyte per second) upload - upload speed in KB/s (kilobyte per second) Example: $ firejail --name=mybrowser --net=eth0 firefox & $ firejail --bandwidth=mybrowser set eth0 80 20 $ firejail --bandwidth=mybrowser status $ firejail --bandwidth=mybrowser clear eth0 LICENSE top This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. Homepage: https://firejail.wordpress.com SEE ALSO top firemon(1), firecfg(1), firejail-profile(5), firejail-login(5), firejail-users(5), jailcheck(1) https://github.com/netblue30/firejail/wiki, https://github.com/netblue30/firejail COLOPHON top This page is part of the Firejail (Firejail security sandbox) project. Information about the project can be found at https://firejail.wordpress.com. If you have a bug report for this manual page, see https://firejail.wordpress.com/support/. This page was obtained from the project's upstream Git repository https://github.com/netblue30/firejail.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-21.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 0.9.73 Jun 2023 FIREJAIL(1)
# firejail\n\n> Securely sandboxes processes to containers using built-in Linux capabilities.\n> More information: <https://manned.org/firejail>.\n\n- Integrate firejail with your desktop environment:\n\n`sudo firecfg`\n\n- Open a restricted Mozilla Firefox:\n\n`firejail {{firefox}}`\n\n- Start a restricted Apache server on a known interface and address:\n\n`firejail --net={{eth0}} --ip={{192.168.1.244}} {{/etc/init.d/apache2}} {{start}}`\n\n- List running sandboxes:\n\n`firejail --list`\n\n- List network activity from running sandboxes:\n\n`firejail --netstats`\n\n- Shutdown a running sandbox:\n\n`firejail --shutdown={{7777}}`\n\n- Run a restricted Firefox session to browse the internet:\n\n`firejail --seccomp --private --private-dev --private-tmp --protocol=inet firefox --new-instance --no-remote --safe-mode --private-window`\n\n- Use custom hosts file (overriding `/etc/hosts` file):\n\n`firejail --hosts-file={{~/myhosts}} {{curl http://mysite.arpa}}`\n
fixfiles
fixfiles(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fixfiles(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ARGUMENTS | AUTHOR | SEE ALSO | COLOPHON fixfiles(8) fixfiles(8) NAME top fixfiles - fix file SELinux security contexts. SYNOPSIS top fixfiles [-v] [-F] [-M] [-f] [-T nthreads] relabel fixfiles [-v] [-F] [-T nthreads] { check | restore | verify } dir/file ... fixfiles [-v] [-F] [-B | -N time ] [-T nthreads] { check | restore | verify } fixfiles [-v] [-F] [-T nthreads] -R rpmpackagename[,rpmpackagename...] { check | restore | verify } fixfiles [-v] [-F] [-T nthreads] -C PREVIOUS_FILECONTEXT { check | restore | verify } fixfiles [-F] [-M] [-B] [-T nthreads] onboot DESCRIPTION top This manual page describes the fixfiles script. This script is primarily used to correct the security context database (extended attributes) on filesystems. It can also be run at any time to relabel when adding support for new policy, or just check whether the file contexts are all as you expect. By default it will relabel all mounted ext2, ext3, ext4, gfs2, xfs, jfs and btrfs file systems as long as they do not have a security context mount option. You can use the -R flag to use rpmpackages as an alternative. The file /etc/selinux/fixfiles_exclude_dirs can contain a list of directories excluded from relabeling. fixfiles onboot will setup the machine to relabel on the next reboot. OPTIONS top -B If specified with onboot, this fixfiles will record the current date in the /.autorelabel file, so that it can be used later to speed up labeling. If used with restore, the restore will only affect files that were modified today. -F Force reset of context to match file_context for customizable files -f Clear /tmp directory with out prompt for removal. -R rpmpackagename[,rpmpackagename...] Use the rpm database to discover all files within the specified packages and restore the file contexts. -C PREVIOUS_FILECONTEXT Run a diff on the PREVIOUS_FILECONTEXT file to the currently installed one, and restore the context of all affected files. -N time Only act on files created after the specified date. Date must be specified in "YYYY-MM-DD HH:MM" format. Date field will be passed to find --newermt command. -M Bind mount filesystems before relabeling them, this allows fixing the context of files or directories that have been mounted over. -v Modify verbosity from progress to verbose. (Run restorecon with -v instead of -p) -T nthreads Use parallel relabeling, see setfiles(8) ARGUMENTS top One of: check | verify print any incorrect file context labels, showing old and new context, but do not change them. restore change any incorrect file context labels. relabel Prompt for removal of contents of /tmp directory and then change any incorrect file context labels to match the install file_contexts file. [[dir/file] ... ] List of files or directories trees that you wish to check file context on. AUTHOR top This man page was written by Richard Hally <rhally@mindspring.com>. The script was written by Dan Walsh <dwalsh@redhat.com> SEE ALSO top setfiles(8), restorecon(8) COLOPHON top This page is part of the selinux (Security-Enhanced Linux user- space libraries and tools) project. Information about the project can be found at https://github.com/SELinuxProject/selinux/wiki. If you have a bug report for this manual page, see https://github.com/SELinuxProject/selinux/wiki/Contributing. 
This page was obtained from the project's upstream Git repository https://github.com/SELinuxProject/selinux on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-05-11.) 2002031409 fixfiles(8)
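A short usage sketch based on the arguments above; the target directory and thread count are arbitrary examples:
    # report mislabeled files without changing anything, then fix them, using 4 threads
    sudo fixfiles -T 4 check /home
    sudo fixfiles -T 4 restore /home
    # queue a full relabel for the next reboot
    sudo fixfiles onboot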
# fixfiles\n\n> Fix file SELinux security contexts.\n> More information: <https://manned.org/fixfiles>.\n\n- Record the current date in the `/.autorelabel` file when used with onboot (to speed up later relabeling); when used with restore, only act on files modified today:\n\n`fixfiles -B`\n\n- [F]orce reset of context to match `file_context` for customizable files:\n\n`fixfiles -F`\n\n- Clear `/tmp` directory without confirmation:\n\n`fixfiles -f`\n\n- Use the [R]pm database to discover all files within specific packages and restore the file contexts:\n\n`fixfiles -R {{rpm_package1,rpm_package2 ...}}`\n\n- Run a diff on the `PREVIOUS_FILECONTEXT` file to the [C]urrently installed one, and restore the context of all affected files:\n\n`fixfiles -C PREVIOUS_FILECONTEXT`\n\n- Only act on files created after a specific date (passed to the `find --newermt` command):\n\n`fixfiles -N {{YYYY-MM-DD HH:MM}}`\n\n- Bind [M]ount filesystems before relabeling them, allowing the context of files or directories that have been mounted over to be fixed:\n\n`fixfiles -M`\n\n- Modify [v]erbosity from progress to verbose and run `restorecon` with `-v` instead of `-p`:\n\n`fixfiles -v`\n
flatpak
flatpak(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training flatpak(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | COMMANDS | FILE FORMATS | ENVIRONMENT | SEE ALSO | COLOPHON FLATPAK(1) flatpak FLATPAK(1) NAME top flatpak - Build, install and run applications and runtimes SYNOPSIS top flatpak [OPTION...] {COMMAND} DESCRIPTION top Flatpak is a tool for managing applications and the runtimes they use. In the Flatpak model, applications can be built and distributed independently from the host system they are used on, and they are isolated from the host system ('sandboxed') to some degree, at runtime. Flatpak can operate in system-wide or per-user mode. The system-wide data (runtimes, applications and configuration) is located in $prefix/var/lib/flatpak/, and the per-user data is in $HOME/.local/share/flatpak/. Below these locations, there is a local repository in the repo/ subdirectory and installed runtimes and applications are in the corresponding runtime/ and app/ subdirectories. System-wide remotes can be statically preconfigured by dropping flatpakrepo files into /etc/flatpak/remotes.d/. In addition to the system-wide installation in $prefix/var/lib/flatpak/, which is always considered the default one unless overridden, more system-wide installations can be defined via configuration files in /etc/flatpak/installations.d/, which must define at least the id of the installation and the absolute path to it. Other optional parameters like DisplayName, Priority or StorageType are also supported. Flatpak uses OSTree to distribute and deploy data. The repositories it uses are OSTree repositories and can be manipulated with the ostree utility. Installed runtimes and applications are OSTree checkouts. Basic commands for building flatpaks such as build-init, build and build-finish are included in the flatpak utility. For higher-level build support, see the separate flatpak-builder(1) tool. Flatpak supports installing from sideload repos. These are partial copies of a repository (generated by flatpak create-usb) that are used as an installation source when offline (and online as a performance improvement). Such repositories are configured by creating symlinks to the sideload sources in the sideload-repos subdirectory of the installation directory (i.e. typically /var/lib/flatpak/sideload-repos or ~/.local/share/flatpak/sideload-repos). Additionally symlinks can be created in /run/flatpak/sideload-repos which is a better location for non-persistent sources (as it is cleared on reboot). These symlinks can point to either the directory given to flatpak create-usb which by default writes to the subpath .ostree/repo, or directly to an ostree repo. OPTIONS top The following global options are understood. Individual commands have their own options. -h, --help Show help options and exit. -v, --verbose Show debug information during command processing. Use -vv for more detail. --ostree-verbose Show OSTree debug information during command processing. --version Print version information and exit. --default-arch Print the default arch and exit. --supported-arches Print the supported arches in priority order and exit. --gl-drivers Print the list of active gl drivers and exit. --installations Print paths of system installations and exit. --print-system-only When the flatpak --print-updated-env command is run, only print the environment for system flatpak installations, not including the users home installation. 
--print-updated-env Print the set of environment variables needed to use flatpaks, amending the current set of environment variables. This is intended to be used in a systemd environment generator, and should not need to be run manually. COMMANDS top Commands for managing installed applications and runtimes: flatpak-install(1) Install an application or a runtime from a remote or bundle. flatpak-update(1) Update an installed application or runtime. flatpak-uninstall(1) Uninstall an installed application or runtime. flatpak-mask(1) Mask out updates and automatic installation. flatpak-pin(1) Pin runtimes to prevent automatic removal. flatpak-list(1) List installed applications and/or runtimes. flatpak-info(1) Show information for an installed application or runtime. flatpak-history(1) Show history. flatpak-config(1) Manage flatpak configuration. flatpak-repair(1) Repair flatpak installation. flatpak-create-usb(1) Copy apps and/or runtimes onto removable media. Commands for finding applications and runtimes: flatpak-search(1) Search for applications and runtimes. Commands for managing running applications: flatpak-run(1) Run an application. flatpak-kill(1) Stop a running application. flatpak-override(1) Override permissions for an application. flatpak-make-current(1) Specify the default version to run. flatpak-enter(1) Enter the namespace of a running application. Commands for managing file access: flatpak-document-export(1) Grant an application access to a specific file. flatpak-document-unexport(1) Revoke access to a specific file. flatpak-document-info(1) Show information about a specific file. flatpak-documents(1) List exported files. Commands for managing the dynamic permission store: flatpak-permission-remove(1) Remove item from permission store. flatpak-permissions(1) List permissions. flatpak-permission-show(1) Show app permissions. flatpak-permission-reset(1) Reset app permissions. flatpak-permission-set(1) Set app permissions. Commands for managing remote repositories: flatpak-remotes(1) List all configured remote repositories. flatpak-remote-add(1) Add a new remote repository. flatpak-remote-modify(1) Modify properties of a configured remote repository. flatpak-remote-delete(1) Delete a configured remote repository. flatpak-remote-ls(1) List contents of a configured remote repository. flatpak-remote-info(1) Show information about a ref in a configured remote repository. Commands for building applications: flatpak-build-init(1) Initialize a build directory. flatpak-build(1) Run a build command in a build directory. flatpak-build-finish(1) Finalizes a build directory for export. flatpak-build-export(1) Export a build directory to a repository. flatpak-build-bundle(1) Create a bundle file from a ref in a local repository. flatpak-build-import-bundle(1) Import a file bundle into a local repository. flatpak-build-sign(1) Sign an application or runtime after its been exported. flatpak-build-update-repo(1) Update the summary file in a repository. flatpak-build-commit-from(1) Create a new commit based on an existing ref. flatpak-repo(1) Print information about a repo. Commands available inside the sandbox: flatpak-spawn(1) Run a command in another sandbox. 
FILE FORMATS top File formats that are used by Flatpak commands: flatpak-flatpakref(5) Reference to a remote for an application or runtime flatpak-flatpakrepo(5) Reference to a remote flatpak-remote(5) Configuration for a remote flatpak-installation(5) Configuration for an installation location flatpak-metadata(5) Information about an application or runtime ENVIRONMENT top Besides standard environment variables such as XDG_DATA_DIRS and XDG_DATA_HOME, flatpak is consulting some of its own. FLATPAK_USER_DIR The location of the per-user installation. If this is not set, $XDG_DATA_HOME/flatpak is used. FLATPAK_SYSTEM_DIR The location of the default system-wide installation. If this is not set, /var/lib/flatpak is used (unless overridden at build time by --localstatedir or --with-system-install-dir). FLATPAK_SYSTEM_CACHE_DIR The location where temporary child repositories will be created during pulls into the system-wide installation. If this is not set, a directory in /var/tmp/ is used. This is useful because it is more likely to be on the same filesystem as the system repository (thus increasing the chances for e.g. reflink copying), and we can avoid filling the user's home directory with temporary data. FLATPAK_CONFIG_DIR The location of flatpak site configuration. If this is not set, /etc/flatpak is used (unless overridden at build time by --sysconfdir). FLATPAK_RUN_DIR The location of flatpak runtime global files. If this is not set, /run/flatpak is used. SEE ALSO top ostree(1), ostree.repo(5), flatpak-remote(5), flatpak-installation(5), https://www.flatpak.org COLOPHON top This page is part of the flatpak (a tool for building and distributing desktop applications on Linux) project. Information about the project can be found at http://flatpak.org/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository https://github.com/flatpak/flatpak on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-08.) 
flatpak FLATPAK(1)
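A typical first-run sequence built from the commands listed above. The Flathub remote URL and the application ID are common examples and are not taken from this page:
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.gnome.Calculator
    flatpak run org.gnome.Calculator
    flatpak update    # update all installed applications and runtimes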
# flatpak\n\n> Build, install and run flatpak applications and runtimes.\n> More information: <https://docs.flatpak.org/en/latest/flatpak-command-reference.html#flatpak>.\n\n- Run an installed application:\n\n`flatpak run {{name}}`\n\n- Install an application from a remote source:\n\n`flatpak install {{remote}} {{name}}`\n\n- List installed applications, ignoring runtimes:\n\n`flatpak list --app`\n\n- Update all installed applications and runtimes:\n\n`flatpak update`\n\n- Add a remote source:\n\n`flatpak remote-add --if-not-exists {{remote_name}} {{remote_url}}`\n\n- Remove an installed application:\n\n`flatpak remove {{name}}`\n\n- Remove all unused applications:\n\n`flatpak remove --unused`\n\n- Show information about an installed application:\n\n`flatpak info {{name}}`\n
flock
flock(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training flock(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | NOTES | EXAMPLES | AUTHORS | COPYRIGHT | SEE ALSO | REPORTING BUGS | AVAILABILITY FLOCK(1) User Commands FLOCK(1) NAME top flock - manage locks from shell scripts SYNOPSIS top flock [options] file|directory command [arguments] flock [options] file|directory -c command flock [options] number DESCRIPTION top This utility manages flock(2) locks from within shell scripts or from the command line. The first and second of the above forms wrap the lock around the execution of a command, in a manner similar to su(1) or newgrp(1). They lock a specified file or directory, which is created (assuming appropriate permissions) if it does not already exist. By default, if the lock cannot be immediately acquired, flock waits until the lock is available. The third form uses an open file by its file descriptor number. See the examples below for how that can be used. OPTIONS top -c, --command command Pass a single command, without arguments, to the shell with -c. -E, --conflict-exit-code number The exit status used when the -n option is in use, and the conflicting lock exists, or the -w option is in use, and the timeout is reached. The default value is 1. The number has to be in the range of 0 to 255. -F, --no-fork Do not fork before executing command. Upon execution the flock process is replaced by command which continues to hold the lock. This option is incompatible with --close as there would otherwise be nothing left to hold the lock. -e, -x, --exclusive Obtain an exclusive lock, sometimes called a write lock. This is the default. -n, --nb, --nonblock Fail rather than wait if the lock cannot be immediately acquired. See the -E option for the exit status used. -o, --close Close the file descriptor on which the lock is held before executing command. This is useful if command spawns a child process which should not be holding the lock. -s, --shared Obtain a shared lock, sometimes called a read lock. -u, --unlock Drop a lock. This is usually not required, since a lock is automatically dropped when the file is closed. However, it may be required in special cases, for example if the enclosed command group may have forked a background process which should not be holding the lock. -w, --wait, --timeout seconds Fail if the lock cannot be acquired within seconds. Decimal fractional values are allowed. See the -E option for the exit status used. The zero number of seconds is interpreted as --nonblock. --verbose Report how long it took to acquire the lock, or why the lock could not be obtained. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top The command uses <sysexits.h> exit status values for everything, except when using either of the options -n or -w which report a failure to acquire the lock with an exit status given by the -E option, or 1 by default. The exit status given by -E has to be in the range of 0 to 255. When using the command variant, and executing the child worked, then the exit status is that of the child command. NOTES top flock does not detect deadlock. See flock(2) for details. Some file systems (e. g. NFS and CIFS) have a limited implementation of flock(2) and flock may always fail. For details see flock(2), nfs(5) and mount.cifs(8). Depending on mount options, flock can always fail there. EXAMPLES top Note that "shell> " in examples is a command line prompt. 
shell1> flock /tmp -c cat; shell2> flock -w .007 /tmp -c echo; /bin/echo $? Set exclusive lock to directory /tmp and the second command will fail. shell1> flock -s /tmp -c cat; shell2> flock -s -w .007 /tmp -c echo; /bin/echo $? Set shared lock to directory /tmp and the second command will not fail. Notice that attempting to get exclusive lock with second command would fail. shell> flock -x local-lock-file echo 'a b c' Grab the exclusive lock "local-lock-file" before running echo with 'a b c'. (; flock -n 9 || exit 1; # ... commands executed under lock ...; ) 9>/var/lock/mylockfile The form is convenient inside shell scripts. The mode used to open the file doesn't matter to flock; using > or >> allows the lockfile to be created if it does not already exist; however, write permission is required. Using < requires that the file already exists but only read permission is required. [ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || : This is useful boilerplate code for shell scripts. Put it at the top of the shell script you want to lock and it'll automatically lock itself on the first run. If the environment variable $FLOCKER is not set to the shell script that is being run, then execute flock and grab an exclusive non-blocking lock (using the script itself as the lock file) before re-execing itself with the right arguments. It also sets the FLOCKER environment variable to the right value so it doesn't run again. shell> exec 4<>/var/lock/mylockfile; shell> flock -n 4 This form is convenient for locking a file without spawning a subprocess. The shell opens the lock file for reading and writing as file descriptor 4, then flock is used to lock the descriptor. AUTHORS top H. Peter Anvin <hpa@zytor.com> COPYRIGHT top Copyright © 2003-2006 H. Peter Anvin. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. SEE ALSO top flock(2) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The flock command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) util-linux 2.39.594-1e0ad 2023-07-19 FLOCK(1)
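A common cron-style pattern assembled from the options above; the lock path and job script are hypothetical:
    # skip this run entirely if a previous invocation still holds the lock
    flock --nonblock /tmp/nightly-backup.lock /usr/local/bin/nightly-backup.sh
    # or wait at most 30 seconds, then give up with exit status 75
    flock --wait 30 --conflict-exit-code 75 /tmp/nightly-backup.lock /usr/local/bin/nightly-backup.sh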
# flock\n\n> Manage locks from shell scripts.\n> It can be used to ensure that only one instance of a command runs at a time.\n> More information: <https://manned.org/flock>.\n\n- Run a command with a file lock, waiting until the lock is released if another process holds it:\n\n`flock {{path/to/lock.lock}} --command "{{command}}"`\n\n- Run a command with a file lock, exiting immediately if the lock cannot be acquired:\n\n`flock {{path/to/lock.lock}} --nonblock --command "{{command}}"`\n\n- Run a command with a file lock, exiting with a specific error code if the lock cannot be acquired:\n\n`flock {{path/to/lock.lock}} --nonblock --conflict-exit-code {{error_code}} -c "{{command}}"`\n
flow
tc-flow(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training tc-flow(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | KEYS | EXAMPLES | SEE ALSO | COLOPHON Flow filter in tc(8) Linux Flow filter in tc(8) NAME top flow - flow based traffic control filter SYNOPSIS top Mapping mode: tc filter ... flow map key KEY [ OPS ] [ OPTIONS ] Hashing mode: tc filter ... flow hash keys KEY_LIST [ perturb secs ] [ OPTIONS ] OPS := [ OPS ] OP OPTIONS := [ divisor NUM ] [ baseclass ID ] [ match EMATCH_TREE ] [ action ACTION_SPEC ] KEY_LIST := [ KEY_LIST ] KEY OP := { or | and | xor | rshift | addend } NUM ID := X:Y KEY := { src | dst | proto | proto-src | proto-dst | iif | priority | mark | nfct | nfct-src | nfct-dst | nfct-proto-src | nfct-proto-dst | rt-classid | sk-uid | sk-gid | vlan-tag | rxhash } DESCRIPTION top The flow classifier is meant to extend the SFQ hashing capabilities without hard-coding new hash functions. It also allows deterministic mappings of keys to classes. OPTIONS top action ACTION_SPEC Apply an action from the generic actions framework on matching packets. baseclass ID An offset for the resulting class ID. ID may be root, none or a hexadecimal class ID in the form [X:]Y. X must match qdisc's/class's major handle (if omitted, the correct value is chosen automatically). If the whole baseclass is omitted, Y defaults to 1. divisor NUM Number of buckets to use for sorting into. Keys are calculated modulo NUM. hash keys KEY-LIST Perform a jhash2 operation over the keys in KEY-LIST; the result (modulo the divisor if given) is taken as class ID, optionally offset by the value of baseclass. It is possible to specify an interval (in seconds) after which jhash2's entropy source is recreated using the perturb parameter. map key KEY Packet data identified by KEY is translated into class IDs to push the packet into. The value may be mangled by OPS before using it for the mapping. They are applied in the order listed here: and NUM Perform bitwise AND operation with numeric value NUM. or NUM Perform bitwise OR operation with numeric value NUM. xor NUM Perform bitwise XOR operation with numeric value NUM. rshift NUM Shift the value of KEY to the right by NUM bits. addend NUM Add NUM to the value of KEY. For the or, and, xor and rshift operations, NUM is assumed to be an unsigned, 32bit integer value. For the addend operation, NUM may be much more complex: It may be prefixed by a minus ('-') sign to cause subtraction instead of addition and for keys of src, dst, nfct-src and nfct-dst it may be given in IP address notation. See below for an illustrative example. match EMATCH_TREE Match packets using the extended match infrastructure. See tc-ematch(8) for a detailed description of the allowed syntax in EMATCH_TREE. KEYS top In mapping mode, a single key is used (after optional permutation) to build a class ID. The resulting ID is deducible in most cases. In hashing mode, a number of keys may be specified which are then hashed and the output used as class ID. This ID is not deducible beforehand, and may even change over time for a given flow if a perturb interval has been given. The range of class IDs can be limited by the divisor option, which is used for a modulus. src, dst Use source or destination address as key. In case of IPv4 and TIPC, this is the actual address value. For IPv6, the 128bit address is folded into a 32bit value by XOR'ing the four 32bit words.
In all other cases, the kernel-internal socket address is used (after folding into 32bits on 64bit systems). proto Use the layer four protocol number as key. proto-src Use the layer four source port as key. If not available, the kernel-internal socket address is used instead. proto-dst Use the layer four destination port as key. If not available, the associated kernel-internal dst_entry address is used after XOR'ing with the packet's layer three protocol number. iif Use the incoming interface index as key. priority Use the packet's priority as key. Usually this is the IP header's DSCP/ECN value. mark Use the netfilter fwmark as key. nfct Use the associated conntrack entry address as key. nfct-src, nfct-dst, nfct-proto-src, nfct-proto-dst These are conntrack-aware variants of src, dst, proto-src and proto-dst. In case of NAT, these are basically the packet header's values before NAT was applied. rt-classid Use the packet's destination routing table entry's realm as key. sk-uid sk-gid For locally generated packets, use the user or group ID the originating socket belongs to as key. vlan-tag Use the packet's vlan ID as key. rxhash Use the flow hash as key. EXAMPLES top Classic SFQ hash: tc filter add ... flow hash \ keys src,dst,proto,proto-src,proto-dst divisor 1024 Classic SFQ hash, but using information from conntrack to work properly in combination with NAT: tc filter add ... flow hash \ keys nfct-src,nfct-dst,proto,nfct-proto-src,nfct-proto-dst \ divisor 1024 Map destination IPs of 192.168.0.0/24 to classids 1-256: tc filter add ... flow map \ key dst addend -192.168.0.0 divisor 256 Alternative to the above: tc filter add ... flow map \ key dst and 0xff The same, but in reverse order: tc filter add ... flow map \ key dst and 0xff xor 0xff SEE ALSO top tc(8), tc-ematch(8), tc-sfq(8) COLOPHON top This page is part of the iproute2 (utilities for controlling TCP/IP networking and traffic) project. Information about the project can be found at http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2. If you have a bug report for this manual page, send it to netdev@vger.kernel.org, shemminger@osdl.org. This page was obtained from the project's upstream Git repository https://git.kernel.org/pub/scm/network/iproute2/iproute2.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org iproute2 20 Oct 2015 Flow filter in tc(8) Pages that refer to this page: tc(8), tc-flower(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
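The examples above omit the qdisc setup; a fuller sketch might look like the following, where the device name, handle and priority are hypothetical:
    # attach an SFQ qdisc, then steer flows into its buckets with a conntrack-aware hash
    tc qdisc add dev eth0 root handle 1: sfq perturb 10
    tc filter add dev eth0 parent 1: protocol ip prio 1 \
        flow hash keys nfct-src,nfct-dst,proto,nfct-proto-src,nfct-proto-dst divisor 1024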
# flow\n\n> A static type checker for JavaScript.\n> More information: <https://flow.org>.\n\n- Run a flow check:\n\n`flow`\n\n- Check which files are being checked by flow:\n\n`flow ls`\n\n- Run a type coverage check on all files in a directory:\n\n`flow batch-coverage --show-all --strip-root {{path/to/directory}}`\n\n- Display line-by-line type coverage stats:\n\n`flow coverage --color {{path/to/file.jsx}}`\n
fmt
fmt(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fmt(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON FMT(1) User Commands FMT(1) NAME top fmt - simple optimal text formatter SYNOPSIS top fmt [-WIDTH] [OPTION]... [FILE]... DESCRIPTION top Reformat each paragraph in the FILE(s), writing to standard output. The option -WIDTH is an abbreviated form of --width=DIGITS. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -c, --crown-margin preserve indentation of first two lines -p, --prefix=STRING reformat only lines beginning with STRING, reattaching the prefix to reformatted lines -s, --split-only split long lines, but do not refill -t, --tagged-paragraph indentation of first line different from second -u, --uniform-spacing one space between words, two after sentences -w, --width=WIDTH maximum line width (default of 75 columns) -g, --goal=WIDTH goal width (default of 93% of width) --help display this help and exit --version output version information and exit AUTHOR top Written by Ross Paterson. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/fmt> or available locally via: info '(coreutils) fmt invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 FMT(1) Pages that refer to this page: fold(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
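For instance, assuming a plain-text file and a quoted e-mail reply (file names are placeholders):
    # wrap paragraphs at 60 columns instead of the default 75
    fmt -w 60 notes.txt
    # reformat only the quoted lines beginning with "> ", keeping the prefix
    fmt -p '> ' -w 72 reply.txt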
# fmt\n\n> Reformat a text file by joining its paragraphs and limiting the line width to a number of characters (75 by default).\n> More information: <https://www.gnu.org/software/coreutils/fmt>.\n\n- Reformat a file:\n\n`fmt {{path/to/file}}`\n\n- Reformat a file producing output lines of (at most) `n` characters:\n\n`fmt -w {{n}} {{path/to/file}}`\n\n- Reformat a file without joining lines shorter than the given width together:\n\n`fmt -s {{path/to/file}}`\n\n- Reformat a file with uniform spacing (1 space between words and 2 spaces between paragraphs):\n\n`fmt -u {{path/to/file}}`\n
fold
fold(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fold(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON FOLD(1) User Commands FOLD(1) NAME top fold - wrap each input line to fit in specified width SYNOPSIS top fold [OPTION]... [FILE]... DESCRIPTION top Wrap input lines in each FILE, writing to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --bytes count bytes rather than columns -s, --spaces break at spaces -w, --width=WIDTH use WIDTH columns instead of 80 --help display this help and exit --version output version information and exit AUTHOR top Written by David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top fmt(1) Full documentation <https://www.gnu.org/software/coreutils/fold> or available locally via: info '(coreutils) fold invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 FOLD(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
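A quick sketch of the three options (file names are placeholders):
    # hard-wrap at 72 columns; add -s to break at spaces instead of mid-word
    fold -w 72 notes.txt
    fold -s -w 72 notes.txt
    # count bytes rather than columns
    fold -b -w 72 notes.txt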
# fold\n\n> Folds long lines for fixed-width output devices.\n> More information: <https://www.gnu.org/software/coreutils/fold>.\n\n- Fold lines in a fixed width:\n\n`fold --width {{width}} {{path/to/file}}`\n\n- Count width in bytes (the default is to count in columns):\n\n`fold --bytes --width {{width_in_bytes}} {{path/to/file}}`\n\n- Break the line after the rightmost blank within the width limit:\n\n`fold --spaces --width {{width}} {{path/to/file}}`\n
free
free(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training free(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | BUGS | SEE ALSO | COLOPHON FREE(1) User Commands FREE(1) NAME top free - Display amount of free and used memory in the system SYNOPSIS top free [options] DESCRIPTION top free displays the total amount of free and used physical and swap memory in the system, as well as the buffers and caches used by the kernel. The information is gathered by parsing /proc/meminfo. The displayed columns are: total Total usable memory (MemTotal and SwapTotal in /proc/meminfo). This includes the physical and swap memory minus a few reserved bits and kernel binary code. used Used or unavailable memory (calculated as total - available) free Unused memory (MemFree and SwapFree in /proc/meminfo) shared Memory used (mostly) by tmpfs (Shmem in /proc/meminfo) buffers Memory used by kernel buffers (Buffers in /proc/meminfo) cache Memory used by the page cache and slabs (Cached and SReclaimable in /proc/meminfo) buff/cache Sum of buffers and cache available Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free) OPTIONS top -b, --bytes Display the amount of memory in bytes. -k, --kibi Display the amount of memory in kibibytes. This is the default. -m, --mebi Display the amount of memory in mebibytes. -g, --gibi Display the amount of memory in gibibytes. --tebi Display the amount of memory in tebibytes. --pebi Display the amount of memory in pebibytes. --kilo Display the amount of memory in kilobytes. Implies --si. --mega Display the amount of memory in megabytes. Implies --si. --giga Display the amount of memory in gigabytes. Implies --si. --tera Display the amount of memory in terabytes. Implies --si. --peta Display the amount of memory in petabytes. Implies --si. -h, --human Show all output fields automatically scaled to shortest three digit unit and display the units of print out. Following units are used. B = bytes Ki = kibibyte Mi = mebibyte Gi = gibibyte Ti = tebibyte Pi = pebibyte If unit is missing, and you have exbibyte of RAM or swap, the number is in tebibytes and columns might not be aligned with header. -w, --wide Switch to the wide mode. The wide mode produces lines longer than 80 characters. In this mode buffers and cache are reported in two separate columns. -c, --count count Display the result count times. Requires the -s option. -l, --lohi Show detailed low and high memory statistics. -L, --line Show output on a single line, often used with the -s option to show memory statistics repeatedly. -s, --seconds delay Continuously display the result delay seconds apart. You may actually specify any floating point number for delay using either . or , for decimal point. usleep(3) is used for microsecond resolution delay times. --si Use kilo, mega, giga etc (power of 1000) instead of kibi, mebi, gibi (power of 1024). -t, --total Display a line showing the column totals. -v, --committed Display a line showing the memory commit limit and amount of committed/uncommitted memory. The total column on this line will display the memory commit limit. This line is relevant if memory overcommit is disabled. 
--help Print help. -V, --version Display version information. FILES top /proc/meminfo memory information BUGS top The value for the shared column is not available from kernels before 2.6.32 and is displayed as zero. Please send bug reports to procps@freelists.org SEE ALSO top ps(1), slabtop(1), top(1), vmstat(8). COLOPHON top This page is part of the procps-ng (/proc filesystem utilities) project. Information about the project can be found at https://gitlab.com/procps-ng/procps. If you have a bug report for this manual page, see https://gitlab.com/procps-ng/procps/blob/master/Documentation/bugs.md. This page was obtained from the project's upstream Git repository https://gitlab.com/procps-ng/procps.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-10-16.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org procps-ng 2023-05-02 FREE(1) Pages that refer to this page: htop(1), pcp-free(1), slabtop(1), top(1), w(1), proc(5), tmpfs(5), vmstat(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
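For example (the interval and count are arbitrary):
    # one human-readable snapshot with a totals line
    free -h -t
    # wide layout with separate buffers and cache columns, refreshed every 5 seconds, 3 times
    free -w -s 5 -c 3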
# free\n\n> Display amount of free and used memory in the system.\n> More information: <https://manned.org/free>.\n\n- Display system memory:\n\n`free`\n\n- Display memory in Bytes/KB/MB/GB:\n\n`free -{{b|k|m|g}}`\n\n- Display memory in human-readable units:\n\n`free -h`\n\n- Refresh the output every 2 seconds:\n\n`free -s {{2}}`\n
fsck
fsck(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training Another version of this page is provided by the e2fsprogs project fsck(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILESYSTEM SPECIFIC OPTIONS | ENVIRONMENT | FILES | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY FSCK(8) System Administration FSCK(8) NAME top fsck - check and repair a Linux filesystem SYNOPSIS top fsck [-lsAVRTMNP] [-r [fd]] [-C [fd]] [-t fstype] [filesystem...] [--] [fs-specific-options] DESCRIPTION top fsck is used to check and optionally repair one or more Linux filesystems. filesystem can be a device name (e.g., /dev/hdc1, /dev/sdb2), a mount point (e.g., /, /usr, /home), or a filesystem label or UUID specifier (e.g., UUID=8868abf6-88c5-4a83-98b8-bfc24057f7bd or LABEL=root). Normally, the fsck program will try to handle filesystems on different physical disk drives in parallel to reduce the total amount of time needed to check all of them. If no filesystems are specified on the command line, and the -A option is not specified, fsck will default to checking filesystems in /etc/fstab serially. This is equivalent to the -As options. The exit status returned by fsck is the sum of the following conditions: 0 No errors 1 Filesystem errors corrected 2 System should be rebooted 4 Filesystem errors left uncorrected 8 Operational error 16 Usage or syntax error 32 Checking canceled by user request 128 Shared-library error The exit status returned when multiple filesystems are checked is the bit-wise OR of the exit statuses for each filesystem that is checked. In actuality, fsck is simply a front-end for the various filesystem checkers (fsck.fstype) available under Linux. The filesystem-specific checker is searched for in the PATH environment variable. If the PATH is undefined then fallback to /sbin. Please see the filesystem-specific checker manual pages for further details. OPTIONS top -l Create an exclusive flock(2) lock file (/run/fsck/<diskname>.lock) for whole-disk device. This option can be used with one device only (this means that -A and -l are mutually exclusive). This option is recommended when more fsck instances are executed in the same time. The option is ignored when used for multiple devices or for non-rotating disks. fsck does not lock underlying devices when executed to check stacked devices (e.g. MD or DM) - this feature is not implemented yet. -r [fd] Report certain statistics for each fsck when it completes. These statistics include the exit status, the maximum run set size (in kilobytes), the elapsed all-clock time and the user and system CPU time used by the fsck run. For example: /dev/sda1: status 0, rss 92828, real 4.002804, user 2.677592, sys 0.86186 GUI front-ends may specify a file descriptor fd, in which case the progress bar information will be sent to that file descriptor in a machine parsable format. For example: /dev/sda1 0 92828 4.002804 2.677592 0.86186 -s Serialize fsck operations. This is a good idea if you are checking multiple filesystems and the checkers are in an interactive mode. (Note: e2fsck(8) runs in an interactive mode by default. To make e2fsck(8) run in a non-interactive mode, you must either specify the -p or -a option, if you wish for errors to be corrected automatically, or the -n option if you do not.) -t fslist Specifies the type(s) of filesystem to be checked. When the -A flag is specified, only filesystems that match fslist are checked. 
The fslist parameter is a comma-separated list of filesystems and options specifiers. All of the filesystems in this comma-separated list may be prefixed by a negation operator 'no' or '!', which requests that only those filesystems not listed in fslist will be checked. If none of the filesystems in fslist is prefixed by a negation operator, then only those listed filesystems will be checked. Options specifiers may be included in the comma-separated fslist. They must have the format opts=fs-option. If an options specifier is present, then only filesystems which contain fs-option in their mount options field of /etc/fstab will be checked. If the options specifier is prefixed by a negation operator, then only those filesystems that do not have fs-option in their mount options field of /etc/fstab will be checked. For example, if opts=ro appears in fslist, then only filesystems listed in /etc/fstab with the ro option will be checked. For compatibility with Mandrake distributions whose boot scripts depend upon an unauthorized UI change to the fsck program, if a filesystem type of loop is found in fslist, it is treated as if opts=loop were specified as an argument to the -t option. Normally, the filesystem type is deduced by searching for filesys in the /etc/fstab file and using the corresponding entry. If the type cannot be deduced, and there is only a single filesystem given as an argument to the -t option, fsck will use the specified filesystem type. If this type is not available, then the default filesystem type (currently ext2) is used. -A Walk through the /etc/fstab file and try to check all filesystems in one run. This option is typically used from the /etc/rc system initialization file, instead of multiple commands for checking a single filesystem. The root filesystem will be checked first unless the -P option is specified (see below). After that, filesystems will be checked in the order specified by the fs_passno (the sixth) field in the /etc/fstab file. Filesystems with a fs_passno value of 0 are skipped and are not checked at all. Filesystems with a fs_passno value of greater than zero will be checked in order, with filesystems with the lowest fs_passno number being checked first. If there are multiple filesystems with the same pass number, fsck will attempt to check them in parallel, although it will avoid running multiple filesystem checks on the same physical disk. fsck does not check stacked devices (RAIDs, dm-crypt, ...) in parallel with any other device. See below for FSCK_FORCE_ALL_PARALLEL setting. The /sys filesystem is used to determine dependencies between devices. Hence, a very common configuration in /etc/fstab files is to set the root filesystem to have a fs_passno value of 1 and to set all other filesystems to have a fs_passno value of 2. This will allow fsck to automatically run filesystem checkers in parallel if it is advantageous to do so. System administrators might choose not to use this configuration if they need to avoid multiple filesystem checks running in parallel for some reason - for example, if the machine in question is short on memory so that excessive paging is a concern. fsck normally does not check whether the device actually exists before calling a filesystem specific checker. Therefore non-existing devices may cause the system to enter filesystem repair mode during boot if the filesystem specific checker returns a fatal error. The /etc/fstab mount option nofail may be used to have fsck skip non-existing devices. 
fsck also skips non-existing devices that have the special filesystem type auto. -C [fd] Display completion/progress bars for those filesystem checkers (currently only for ext[234]) which support them. fsck will manage the filesystem checkers so that only one of them will display a progress bar at a time. GUI front-ends may specify a file descriptor fd, in which case the progress bar information will be sent to that file descriptor. -M Do not check mounted filesystems and return an exit status of 0 for mounted filesystems. -N Dont execute, just show what would be done. -P When the -A flag is set, check the root filesystem in parallel with the other filesystems. This is not the safest thing in the world to do, since if the root filesystem is in doubt things like the e2fsck(8) executable might be corrupted! This option is mainly provided for those sysadmins who dont want to repartition the root filesystem to be small and compact (which is really the right solution). -R When checking all filesystems with the -A flag, skip the root filesystem. (This is useful in case the root filesystem has already been mounted read-write.) -T Dont show the title on startup. -V Produce verbose output, including all filesystem-specific commands that are executed. -?, --help Display help text and exit. --version Display version information and exit. FILESYSTEM SPECIFIC OPTIONS top Options which are not understood by fsck are passed to the filesystem-specific checker! These options must not take arguments, as there is no way for fsck to be able to properly guess which options take arguments and which dont. Options and arguments which follow the -- are treated as filesystem-specific options to be passed to the filesystem-specific checker. Please note that fsck is not designed to pass arbitrarily complicated options to filesystem-specific checkers. If youre doing something complicated, please just execute the filesystem-specific checker directly. If you pass fsck some horribly complicated options and arguments, and it doesnt do what you expect, dont bother reporting it as a bug. Youre almost certainly doing something that you shouldnt be doing with fsck. Options to different filesystem-specific fscks are not standardized. ENVIRONMENT top The fsck programs behavior is affected by the following environment variables: FSCK_FORCE_ALL_PARALLEL If this environment variable is set, fsck will attempt to check all of the specified filesystems in parallel, regardless of whether the filesystems appear to be on the same device. (This is useful for RAID systems or high-end storage systems such as those sold by companies such as IBM or EMC.) Note that the fs_passno value is still used. FSCK_MAX_INST This environment variable will limit the maximum number of filesystem checkers that can be running at one time. This allows configurations which have a large number of disks to avoid fsck starting too many filesystem checkers at once, which might overload CPU and memory resources available on the system. If this value is zero, then an unlimited number of processes can be spawned. This is currently the default, but future versions of fsck may attempt to automatically determine how many filesystem checks can be run based on gathering accounting data from the operating system. PATH The PATH environment variable is used to find filesystem checkers. FSTAB_FILE This environment variable allows the system administrator to override the standard location of the /etc/fstab file. It is also useful for developers who are testing fsck. 
LIBBLKID_DEBUG=all enables libblkid debug output. LIBMOUNT_DEBUG=all enables libmount debug output. FILES top /etc/fstab AUTHORS top Theodore Tso <tytso@mit.edu>>, Karel Zak <kzak@redhat.com> SEE ALSO top fstab(5), mkfs(8), fsck.ext2(8) or fsck.ext3(8) or e2fsck(8), fsck.cramfs(8), fsck.jfs(8), fsck.nfs(8), fsck.minix(8), fsck.msdos(8), fsck.vfat(8), fsck.xfs(8), reiserfsck(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The fsck command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 FSCK(8) Pages that refer to this page: systemd-dissect(1), filesystems(5), fstab(5), e2mmpstatus(8), fsadm(8), fsck.btrfs(8), fsck.minix(8), fsck.xfs(8), logsave(8), mkfs(8), mkfs.minix(8), quotacheck(8), systemd-fsck@.service(8), tune2fs(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
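A cautious sketch; the device name is a placeholder and the filesystem should be unmounted before an actual check:
    # show what would be run for every filesystem in /etc/fstab, without executing anything
    sudo fsck -A -N
    # verbosely check a single filesystem, skipping it (with exit status 0) if it is mounted
    sudo fsck -M -V /dev/sdb1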
# fsck\n\n> Check the integrity of a filesystem or repair it. The filesystem should be unmounted at the time the command is run.\n> More information: <https://manned.org/fsck>.\n\n- Check filesystem `/dev/sdXN`, reporting any damaged blocks:\n\n`sudo fsck {{/dev/sdXN}}`\n\n- Check filesystem `/dev/sdXN`, reporting any damaged blocks and interactively letting the user choose to repair each one:\n\n`sudo fsck -r {{/dev/sdXN}}`\n\n- Check filesystem `/dev/sdXN`, reporting any damaged blocks and automatically repairing them:\n\n`sudo fsck -a {{/dev/sdXN}}`\n
fstrim
fstrim(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fstrim(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY FSTRIM(8) System Administration FSTRIM(8) NAME top fstrim - discard unused blocks on a mounted filesystem SYNOPSIS top fstrim [-Aav] [-o offset] [-l length] [-m minimum-size] [mountpoint] DESCRIPTION top fstrim is used on a mounted filesystem to discard (or "trim") blocks which are not in use by the filesystem. This is useful for solid-state drives (SSDs) and thinly-provisioned storage. By default, fstrim will discard all unused blocks in the filesystem. Options may be used to modify this behavior based on range or size, as explained below. The mountpoint argument is the pathname of the directory where the filesystem is mounted and is required when -A, -a, --fstab, or --all are unspecified. Running fstrim frequently, or even using mount -o discard, might negatively affect the lifetime of poor-quality SSD devices. For most desktop and server systems a sufficient trimming frequency is once a week. Note that not all devices support a queued trim, so each trim command incurs a performance penalty on whatever else might be trying to use the disk at the time. OPTIONS top The offset, length, and minimum-size arguments may be followed by the multiplicative suffixes KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB") or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, TB, PB, EB, ZB and YB. -A, --fstab Trim all mounted filesystems mentioned in /etc/fstab on devices that support the discard operation. The root filesystem is determined from kernel command line if missing in the file. The other supplied options, like --offset, --length and --minimum, are applied to all these devices. Errors from filesystems that do not support the discard operation, read-only devices, autofs and read-only filesystems are silently ignored. Filesystems with "X-fstrim.notrim" mount option are skipped. -a, --all Trim all mounted filesystems on devices that support the discard operation. The other supplied options, like --offset, --length and --minimum, are applied to all these devices. Errors from filesystems that do not support the discard operation, read-only devices and read-only filesystems are silently ignored. -n, --dry-run This option does everything apart from actually call FITRIM ioctl. -o, --offset offset Byte offset in the filesystem from which to begin searching for free blocks to discard. The default value is zero, starting at the beginning of the filesystem. -l, --length length The number of bytes (after the starting point) to search for free blocks to discard. If the specified value extends past the end of the filesystem, fstrim will stop at the filesystem size boundary. The default value extends to the end of the filesystem. -I, --listed-in list Specifies a colon-separated list of files in fstab or kernel mountinfo format. All missing or empty files are silently ignored. The evaluation of the list stops after first non-empty file. For example: --listed-in /etc/fstab:/proc/self/mountinfo. Filesystems with "X-fstrim.notrim" mount option in fstab are skipped. -m, --minimum minimum-size Minimum contiguous free range to discard, in bytes. (This value is internally rounded up to a multiple of the filesystem block size.) 
Free ranges smaller than this will be ignored and fstrim will adjust the minimum if it is smaller than the device's minimum, and report that (fstrim_range.minlen) back to userspace. By increasing this value, the fstrim operation will complete more quickly for filesystems with badly fragmented free space, although not all blocks will be discarded. The default value is zero, discarding every free block. -t, --types list Specifies allowed or forbidden filesystem types when used with --all or --fstab. The list is a comma-separated list of the filesystem names. The list follows how mount -t evaluates type patterns. Only specified filesystem types are allowed. All specified types are forbidden if the list is prefixed by "no" or each filesystem prefixed by "no" is forbidden. If the option is not used, then all filesystems (except "autofs") are allowed. -v, --verbose Verbose execution. With this option fstrim will output the number of bytes passed from the filesystem down the block stack to the device for potential discard. This number is a maximum discard amount from the storage device's perspective, because the FITRIM ioctl, when called repeatedly, will keep sending the same sectors for discard. fstrim will report the same potential discard bytes each time, but only sectors which had been written to between the discards would actually be discarded by the storage device. Further, the kernel block layer reserves the right to adjust the discard ranges to fit raid stripe geometry, non-trim capable devices in an LVM setup, etc. These reductions would not be reflected in fstrim_range.len (the --length option). --quiet-unsupported Suppress error messages if the trim operation (ioctl) is unsupported. This option is meant to be used in systemd service files or in cron(8) scripts to hide warnings that are the result of known problems, such as the NTFS driver reporting Bad file descriptor when a device is mounted read-only, or lack of file system support for the ioctl FITRIM call. This option also cleans the exit status when an unsupported filesystem is specified on the fstrim command line. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top 0 success 1 failure 32 all failed 64 some filesystem discards have succeeded, some failed The command fstrim --all returns 0 (all succeeded), 32 (all failed) or 64 (some failed, some succeeded). AUTHORS top Lukas Czerner <lczerner@redhat.com>, Karel Zak <kzak@redhat.com> SEE ALSO top blkdiscard(8), mount(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The fstrim command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>.
# fstrim\n\n> Discard unused blocks on a mounted filesystem.\n> Only useful on devices that support the discard operation, such as SSDs, microSD cards and thinly-provisioned storage.\n> More information: <https://manned.org/fstrim>.\n\n- Trim unused blocks on all mounted partitions that support it:\n\n`sudo fstrim --all`\n\n- Trim unused blocks on a specified mounted filesystem:\n\n`sudo fstrim {{/}}`\n\n- Display statistics after trimming:\n\n`sudo fstrim --verbose {{/}}`\n
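Under the covers, the trim described above is a single FITRIM ioctl on a file descriptor opened on the mountpoint; --offset, --length and --minimum essentially populate the fstrim_range structure passed to it. A minimal C sketch under those assumptions (the "/" default and the lack of option parsing are illustrative; it needs root and a discard-capable device):

```c
/* fitrim.c - a minimal sketch of the FITRIM ioctl that fstrim wraps.
 * Assumptions: run as root on a mountpoint whose device supports discard;
 * the default mountpoint "/" below is purely illustrative. */
#include <fcntl.h>
#include <linux/fs.h>   /* FITRIM, struct fstrim_range */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *mountpoint = argc > 1 ? argv[1] : "/";

    int fd = open(mountpoint, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct fstrim_range range = {
        .start  = 0,            /* like --offset 0: start of the filesystem */
        .len    = UINT64_MAX,   /* like the default --length: to the end    */
        .minlen = 0,            /* like --minimum 0: every free extent      */
    };

    if (ioctl(fd, FITRIM, &range) < 0) {
        perror("FITRIM");       /* e.g. EOPNOTSUPP on non-discard devices */
        close(fd);
        return 1;
    }

    /* The kernel writes back the number of bytes sent down for potential
     * discard; this is the figure fstrim --verbose reports. */
    printf("%llu bytes potentially discarded on %s\n",
           (unsigned long long)range.len, mountpoint);
    close(fd);
    return 0;
}
```

Built with `gcc -o fitrim fitrim.c` and run as root against an example mountpoint such as /mnt, it should report roughly the same byte count that `sudo fstrim --verbose /mnt` prints, minus the human-readable unit formatting.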
fuser
fuser(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training fuser(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | EXAMPLES | RESTRICTIONS | BUGS | SEE ALSO | COLOPHON FUSER(1) User Commands FUSER(1) NAME top fuser - identify processes using files or sockets SYNOPSIS top fuser [-fuv] [-a|-s] [-4|-6] [-c|-m|-n space] [ -k [-i] [-M] [-w] [-SIGNAL] ] name ... fuser -l fuser -V DESCRIPTION top fuser displays the PIDs of processes using the specified files or file systems. In the default display mode, each file name is followed by a letter denoting the type of access: c current directory. e executable being run. f open file. f is omitted in default display mode. F open file for writing. F is omitted in default display mode. r root directory. m mmap'ed file or shared library. . Placeholder, omitted in default display mode. fuser returns a non-zero return code if none of the specified files is accessed or in case of a fatal error. If at least one access has been found, fuser returns zero. In order to look up processes using TCP and UDP sockets, the corresponding name space has to be selected with the -n option. By default fuser will look in both IPv6 and IPv4 sockets. To change the default behavior, use the -4 and -6 options. The socket(s) can be specified by the local and remote port, and the remote address. All fields are optional, but commas in front of missing fields must be present: [lcl_port][,[rmt_host][,[rmt_port]]] Either symbolic or numeric values can be used for IP addresses and port numbers. fuser outputs only the PIDs to stdout, everything else is sent to stderr. OPTIONS top -a, --all Show all files specified on the command line. By default, only files that are accessed by at least one process are shown. -c Same as -m option, used for POSIX compatibility. -f Silently ignored, used for POSIX compatibility. -k, --kill Kill processes accessing the file. Unless changed with -SIGNAL, SIGKILL is sent. An fuser process never kills itself, but may kill other fuser processes. The effective user ID of the process executing fuser is set to its real user ID before attempting to kill. -i, --interactive Ask the user for confirmation before killing a process. This option is silently ignored if -k is not present too. -I, --inode For the name space file let all comparisons be based on the inodes of the specified file(s) and never on the file names even on network based file systems. -l, --list-signals List all known signal names. -m NAME, --mount NAME NAME specifies a file on a mounted file system or a block device that is mounted. All processes accessing files on that file system are listed. If a directory is specified, it is automatically changed to NAME/ to use any file system that might be mounted on that directory. -M, --ismountpoint Request will be fulfilled only if NAME specifies a mountpoint. This is an invaluable seat belt which prevents you from killing the machine if NAME happens to not be a filesystem. -w Kill only processes which have write access. This option is silently ignored if -k is not present too. -n NAMESPACE, --namespace NAMESPACE Select a different name space. The name spaces file (file names, the default), udp (local UDP ports), and tcp (local TCP ports) are supported. For ports, either the port number or the symbolic name can be specified. If there is no ambiguity, the shortcut notation name/space (e.g., 80/tcp) can be used. -s, --silent Silent operation. -u and -v are ignored in this mode. 
-a must not be used with -s. -SIGNAL Use the specified signal instead of SIGKILL when killing processes. Signals can be specified either by name (e.g., -HUP) or by number (e.g., -1). This option is silently ignored if the -k option is not used. -u, --user Append the user name of the process owner to each PID. -v, --verbose Verbose mode. Processes are shown in a ps-like style. The fields PID, USER and COMMAND are similar to ps. ACCESS shows how the process accesses the file. Verbose mode will also show when a particular file is being accessed as a mount point, knfs export or swap file. In this case kernel is shown instead of the PID. -V, --version Display version information. -4, --ipv4 Search only for IPv4 sockets. This option must not be used with the -6 option and only has an effect with the tcp and udp namespaces. -6, --ipv6 Search only for IPv6 sockets. This option must not be used with the -4 option and only has an effect with the tcp and udp namespaces. FILES top /proc location of the proc file system EXAMPLES top fuser -km /home kills all processes accessing the file system /home in any way. if fuser -s /dev/ttyS1; then :; else command; fi invokes command if no other process is using /dev/ttyS1. fuser telnet/tcp shows all processes at the (local) TELNET port. RESTRICTIONS top Processes accessing the same file or file system several times in the same way are only shown once. If the same object is specified several times on the command line, some of those entries may be ignored. fuser may only be able to gather partial information unless run with privileges. As a consequence, files opened by processes belonging to other users may not be listed and executables may be classified as mapped only. fuser cannot report on any processes that it doesn't have permission to look at the file descriptor table for. The most common time this problem occurs is when looking for TCP or UDP sockets when running fuser as a non-root user. In this case fuser will report no access. Installing fuser SUID root will avoid problems associated with partial information, but may be undesirable for security and privacy reasons. udp and tcp name spaces, and UNIX domain sockets can't be searched with kernels older than 1.3.78. Accesses by the kernel are only shown with the -v option. The -k option only works on processes. If the user is the kernel, fuser will print an advice, but take no action beyond that. fuser will not see block devices mounted by processes in a different mount namespace. This is due to the device ID shown in the process' file descriptor table being from the process namespace, not fuser's; meaning it won't match. BUGS top fuser -m /dev/sgX will show (or kill with the -k flag) all processes, even if you don't have that device configured. There may be other devices it does this for too. The mount -m option will match any file within the same device as the specified file, use the -M option as well if you mean to specify only the mount point. fuser will not match mapped files, such as a process' shared libraries if they are on a btrfs(5) filesystem due to the device IDs being different for stat(2) and /proc/<PID>/maps. SEE ALSO top kill(1), killall(1), stat(2), btrfs(5), lsof(8), mount_namespaces(7), pkill(1), ps(1), kill(2). COLOPHON top This page is part of the psmisc (Small utilities that use the /proc filesystem) project. Information about the project can be found at https://gitlab.com/psmisc/psmisc. If you have a bug report for this manual page, see https://gitlab.com/psmisc/psmisc/issues. 
# fuser\n\n> Display process IDs currently using files or sockets.\n> More information: <https://manned.org/fuser>.\n\n- Find which processes are accessing a file or directory:\n\n`fuser {{path/to/file_or_directory}}`\n\n- Show more fields (`USER`, `PID`, `ACCESS` and `COMMAND`):\n\n`fuser --verbose {{path/to/file_or_directory}}`\n\n- Identify processes using a TCP socket:\n\n`fuser --namespace tcp {{port}}`\n\n- Kill all processes accessing a file or directory (sends the `SIGKILL` signal):\n\n`fuser --kill {{path/to/file_or_directory}}`\n\n- Find which processes are accessing the filesystem containing a specific file or directory:\n\n`fuser --mount {{path/to/file_or_directory}}`\n\n- Kill all processes with a TCP connection on a specific port:\n\n`fuser --kill {{port}}/tcp`\n
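The default file-name namespace of fuser comes down to walking /proc: each /proc/<pid>/fd entry is a symlink to a file the process holds open, and comparing those link targets with the requested path yields the PID list. A simplified C sketch of just that open-descriptor check (real fuser also inspects cwd, root, exe and mmap'ed files; this program is an illustration, not how psmisc structures its code):

```c
/* whoholds.c - a simplified sketch of fuser's default file-name lookup:
 * walk /proc/<pid>/fd and print PIDs whose open descriptors resolve to
 * the target path. Real fuser also checks cwd, root, exe and mmap'ed
 * files, and needs privileges to see other users' processes. */
#include <ctype.h>
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 2;
    }

    char target[PATH_MAX];
    if (!realpath(argv[1], target)) {
        perror("realpath");
        return 2;
    }

    DIR *proc = opendir("/proc");
    if (!proc) {
        perror("/proc");
        return 2;
    }

    int found = 0;
    struct dirent *p;
    while ((p = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)p->d_name[0]))
            continue;                       /* only numeric PID directories */

        char fddir[64];
        snprintf(fddir, sizeof fddir, "/proc/%s/fd", p->d_name);
        DIR *fds = opendir(fddir);          /* fails for other users' PIDs  */
        if (!fds)
            continue;

        struct dirent *f;
        while ((f = readdir(fds)) != NULL) {
            char lnk[PATH_MAX], resolved[PATH_MAX];
            snprintf(lnk, sizeof lnk, "%s/%s", fddir, f->d_name);
            ssize_t n = readlink(lnk, resolved, sizeof resolved - 1);
            if (n <= 0)
                continue;                   /* "." and ".." land here too   */
            resolved[n] = '\0';
            if (strcmp(resolved, target) == 0) {  /* an open file ("f")     */
                printf("%s\n", p->d_name);
                found = 1;
                break;
            }
        }
        closedir(fds);
    }
    closedir(proc);
    return found ? 0 : 1;   /* like fuser: non-zero if no access was found */
}
```

Run without privileges it misses other users' processes, because opendir() on their fd directories simply fails; that mirrors the partial-information caveat in the RESTRICTIONS section above.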
g++
g++(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training g++(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT | BUGS | FOOTNOTES | SEE ALSO | AUTHOR | COPYRIGHT | COLOPHON GCC(1) GNU GCC(1) NAME top gcc - GNU project C and C++ compiler SYNOPSIS top gcc [-c|-S|-E] [-std=standard] [-g] [-pg] [-Olevel] [-Wwarn...] [-Wpedantic] [-Idir...] [-Ldir...] [-Dmacro[=defn]...] [-Umacro] [-foption...] [-mmachine-option...] [-o outfile] [@file] infile... Only the most useful options are listed here; see below for the remainder. g++ accepts mostly the same options as gcc. DESCRIPTION top When you invoke GCC, it normally does preprocessing, compilation, assembly and linking. The "overall options" allow you to stop this process at an intermediate stage. For example, the -c option says not to run the linker. Then the output consists of object files output by the assembler. Other options are passed on to one or more stages of processing. Some options control the preprocessor and others the compiler itself. Yet other options control the assembler and linker; most of these are not documented here, since you rarely need to use any of them. Most of the command-line options that you can use with GCC are useful for C programs; when an option is only useful with another language (usually C++), the explanation says so explicitly. If the description for a particular option does not mention a source language, you can use that option with all supported languages. The usual way to run GCC is to run the executable called gcc, or machine-gcc when cross-compiling, or machine-gcc-version to run a specific version of GCC. When you compile C++ programs, you should invoke GCC as g++ instead. The gcc program accepts options and file names as operands. Many options have multi-letter names; therefore multiple single-letter options may not be grouped: -dv is very different from -d -v. You can mix options and other arguments. For the most part, the order you use doesn't matter. Order does matter when you use several options of the same kind; for example, if you specify -L more than once, the directories are searched in the order specified. Also, the placement of the -l option is significant. Many options have long names starting with -f or with -W---for example, -fmove-loop-invariants, -Wformat and so on. Most of these have both positive and negative forms; the negative form of -ffoo is -fno-foo. This manual documents only one of these two forms, whichever one is not the default. Some options take one or more arguments typically separated either by a space or by the equals sign (=) from the option name. Unless documented otherwise, an argument can be either numeric or a string. Numeric arguments must typically be small unsigned decimal or hexadecimal integers. Hexadecimal arguments must begin with the 0x prefix. Arguments to options that specify a size threshold of some sort may be arbitrarily large decimal or hexadecimal integers followed by a byte size suffix designating a multiple of bytes such as "kB" and "KiB" for kilobyte and kibibyte, respectively, "MB" and "MiB" for megabyte and mebibyte, "GB" and "GiB" for gigabyte and gigibyte, and so on. Such arguments are designated by byte-size in the following text. Refer to the NIST, IEC, and other relevant national and international standards for the full listing and explanation of the binary and decimal byte size prefixes. OPTIONS top Option Summary Here is a summary of all the options, grouped by type. 
Explanations are in the following sections. Overall Options -c -S -E -o file -x language -v -### --help[=class[,...]] --target-help --version -pass-exit-codes -pipe -specs=file -wrapper @file -ffile-prefix-map=old=new -fplugin=file -fplugin-arg-name=arg -fdump-ada-spec[-slim] -fada-spec-parent=unit -fdump-go-spec=file C Language Options -ansi -std=standard -fgnu89-inline -fpermitted-flt-eval-methods=standard -aux-info filename -fallow-parameterless-variadic-functions -fno-asm -fno-builtin -fno-builtin-function -fgimple -fhosted -ffreestanding -fopenacc -fopenacc-dim=geom -fopenmp -fopenmp-simd -fms-extensions -fplan9-extensions -fsso-struct=endianness -fallow-single-precision -fcond-mismatch -flax-vector-conversions -fsigned-bitfields -fsigned-char -funsigned-bitfields -funsigned-char C++ Language Options -fabi-version=n -fno-access-control -faligned-new=n -fargs-in-order=n -fchar8_t -fcheck-new -fconstexpr-depth=n -fconstexpr-loop-limit=n -fconstexpr-ops-limit=n -fno-elide-constructors -fno-enforce-eh-specs -fno-gnu-keywords -fno-implicit-templates -fno-implicit-inline-templates -fno-implement-inlines -fms-extensions -fnew-inheriting-ctors -fnew-ttp-matching -fno-nonansi-builtins -fnothrow-opt -fno-operator-names -fno-optional-diags -fpermissive -fno-pretty-templates -frepo -fno-rtti -fsized-deallocation -ftemplate-backtrace-limit=n -ftemplate-depth=n -fno-threadsafe-statics -fuse-cxa-atexit -fno-weak -nostdinc++ -fvisibility-inlines-hidden -fvisibility-ms-compat -fext-numeric-literals -Wabi=n -Wabi-tag -Wconversion-null -Wctor-dtor-privacy -Wdelete-non-virtual-dtor -Wdeprecated-copy -Wdeprecated-copy-dtor -Wliteral-suffix -Wmultiple-inheritance -Wno-init-list-lifetime -Wnamespaces -Wnarrowing -Wpessimizing-move -Wredundant-move -Wnoexcept -Wnoexcept-type -Wclass-memaccess -Wnon-virtual-dtor -Wreorder -Wregister -Weffc++ -Wstrict-null-sentinel -Wtemplates -Wno-non-template-friend -Wold-style-cast -Woverloaded-virtual -Wno-pmf-conversions -Wno-class-conversion -Wno-terminate -Wsign-promo -Wvirtual-inheritance Objective-C and Objective-C++ Language Options -fconstant-string-class=class-name -fgnu-runtime -fnext-runtime -fno-nil-receivers -fobjc-abi-version=n -fobjc-call-cxx-cdtors -fobjc-direct-dispatch -fobjc-exceptions -fobjc-gc -fobjc-nilcheck -fobjc-std=objc1 -fno-local-ivars -fivar-visibility=[public|protected|private|package] -freplace-objc-classes -fzero-link -gen-decls -Wassign-intercept -Wno-protocol -Wselector -Wstrict-selector-match -Wundeclared-selector Diagnostic Message Formatting Options -fmessage-length=n -fdiagnostics-show-location=[once|every- line] -fdiagnostics-color=[auto|never|always] -fdiagnostics-format=[text|json] -fno-diagnostics-show-option -fno-diagnostics-show-caret -fno-diagnostics-show-labels -fno-diagnostics-show-line-numbers -fdiagnostics-minimum-margin-width=width -fdiagnostics-parseable-fixits -fdiagnostics-generate-patch -fdiagnostics-show-template-tree -fno-elide-type -fno-show-column Warning Options -fsyntax-only -fmax-errors=n -Wpedantic -pedantic-errors -w -Wextra -Wall -Waddress -Waddress-of-packed-member -Waggregate-return -Waligned-new -Walloc-zero -Walloc-size-larger-than=byte-size -Walloca -Walloca-larger-than=byte-size -Wno-aggressive-loop-optimizations -Warray-bounds -Warray-bounds=n -Wno-attributes -Wattribute-alias=n -Wbool-compare -Wbool-operation -Wno-builtin-declaration-mismatch -Wno-builtin-macro-redefined -Wc90-c99-compat -Wc99-c11-compat -Wc11-c2x-compat -Wc++-compat -Wc++11-compat -Wc++14-compat -Wc++17-compat -Wcast-align 
-Wcast-align=strict -Wcast-function-type -Wcast-qual -Wchar-subscripts -Wcatch-value -Wcatch-value=n -Wclobbered -Wcomment -Wconditionally-supported -Wconversion -Wcoverage-mismatch -Wno-cpp -Wdangling-else -Wdate-time -Wdelete-incomplete -Wno-attribute-warning -Wno-deprecated -Wno-deprecated-declarations -Wno-designated-init -Wdisabled-optimization -Wno-discarded-qualifiers -Wno-discarded-array-qualifiers -Wno-div-by-zero -Wdouble-promotion -Wduplicated-branches -Wduplicated-cond -Wempty-body -Wenum-compare -Wno-endif-labels -Wexpansion-to-defined -Werror -Werror=* -Wextra-semi -Wfatal-errors -Wfloat-equal -Wformat -Wformat=2 -Wno-format-contains-nul -Wno-format-extra-args -Wformat-nonliteral -Wformat-overflow=n -Wformat-security -Wformat-signedness -Wformat-truncation=n -Wformat-y2k -Wframe-address -Wframe-larger-than=byte-size -Wno-free-nonheap-object -Wjump-misses-init -Whsa -Wif-not-aligned -Wignored-qualifiers -Wignored-attributes -Wincompatible-pointer-types -Wimplicit -Wimplicit-fallthrough -Wimplicit-fallthrough=n -Wimplicit-function-declaration -Wimplicit-int -Winit-self -Winline -Wno-int-conversion -Wint-in-bool-context -Wno-int-to-pointer-cast -Winvalid-memory-model -Wno-invalid-offsetof -Winvalid-pch -Wlarger-than=byte-size -Wlogical-op -Wlogical-not-parentheses -Wlong-long -Wmain -Wmaybe-uninitialized -Wmemset-elt-size -Wmemset-transposed-args -Wmisleading-indentation -Wmissing-attributes -Wmissing-braces -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-include-dirs -Wmissing-noreturn -Wmissing-profile -Wno-multichar -Wmultistatement-macros -Wnonnull -Wnonnull-compare -Wnormalized=[none|id|nfc|nfkc] -Wnull-dereference -Wodr -Wno-overflow -Wopenmp-simd -Woverride-init-side-effects -Woverlength-strings -Wpacked -Wpacked-bitfield-compat -Wpacked-not-aligned -Wpadded -Wparentheses -Wno-pedantic-ms-format -Wplacement-new -Wplacement-new=n -Wpointer-arith -Wpointer-compare -Wno-pointer-to-int-cast -Wno-pragmas -Wno-prio-ctor-dtor -Wredundant-decls -Wrestrict -Wno-return-local-addr -Wreturn-type -Wsequence-point -Wshadow -Wno-shadow-ivar -Wshadow=global, -Wshadow=local, -Wshadow=compatible-local -Wshift-overflow -Wshift-overflow=n -Wshift-count-negative -Wshift-count-overflow -Wshift-negative-value -Wsign-compare -Wsign-conversion -Wfloat-conversion -Wno-scalar-storage-order -Wsizeof-pointer-div -Wsizeof-pointer-memaccess -Wsizeof-array-argument -Wstack-protector -Wstack-usage=byte-size -Wstrict-aliasing -Wstrict-aliasing=n -Wstrict-overflow -Wstrict-overflow=n -Wstringop-overflow=n -Wstringop-truncation -Wsubobject-linkage -Wsuggest-attribute=[pure|const|noreturn|format|malloc] -Wsuggest-final-types -Wsuggest-final-methods -Wsuggest-override -Wswitch -Wswitch-bool -Wswitch-default -Wswitch-enum -Wswitch-unreachable -Wsync-nand -Wsystem-headers -Wtautological-compare -Wtrampolines -Wtrigraphs -Wtype-limits -Wundef -Wuninitialized -Wunknown-pragmas -Wunsuffixed-float-constants -Wunused -Wunused-function -Wunused-label -Wunused-local-typedefs -Wunused-macros -Wunused-parameter -Wno-unused-result -Wunused-value -Wunused-variable -Wunused-const-variable -Wunused-const-variable=n -Wunused-but-set-parameter -Wunused-but-set-variable -Wuseless-cast -Wvariadic-macros -Wvector-operation-performance -Wvla -Wvla-larger-than=byte- size -Wvolatile-register-var -Wwrite-strings -Wzero-as-null-pointer-constant C and Objective-C-only Warning Options -Wbad-function-cast -Wmissing-declarations -Wmissing-parameter-type -Wmissing-prototypes -Wnested-externs -Wold-style-declaration 
-Wold-style-definition -Wstrict-prototypes -Wtraditional -Wtraditional-conversion -Wdeclaration-after-statement -Wpointer-sign Debugging Options -g -glevel -gdwarf -gdwarf-version -ggdb -grecord-gcc-switches -gno-record-gcc-switches -gstabs -gstabs+ -gstrict-dwarf -gno-strict-dwarf -gas-loc-support -gno-as-loc-support -gas-locview-support -gno-as-locview-support -gcolumn-info -gno-column-info -gstatement-frontiers -gno-statement-frontiers -gvariable-location-views -gno-variable-location-views -ginternal-reset-location-views -gno-internal-reset-location-views -ginline-points -gno-inline-points -gvms -gxcoff -gxcoff+ -gz[=type] -gsplit-dwarf -gdescribe-dies -gno-describe-dies -fdebug-prefix-map=old=new -fdebug-types-section -fno-eliminate-unused-debug-types -femit-struct-debug-baseonly -femit-struct-debug-reduced -femit-struct-debug-detailed[=spec-list] -feliminate-unused-debug-symbols -femit-class-debug-always -fno-merge-debug-strings -fno-dwarf2-cfi-asm -fvar-tracking -fvar-tracking-assignments Optimization Options -faggressive-loop-optimizations -falign-functions[=n[:m:[n2[:m2]]]] -falign-jumps[=n[:m:[n2[:m2]]]] -falign-labels[=n[:m:[n2[:m2]]]] -falign-loops[=n[:m:[n2[:m2]]]] -fassociative-math -fauto-profile -fauto-profile[=path] -fauto-inc-dec -fbranch-probabilities -fbranch-target-load-optimize -fbranch-target-load-optimize2 -fbtr-bb-exclusive -fcaller-saves -fcombine-stack-adjustments -fconserve-stack -fcompare-elim -fcprop-registers -fcrossjumping -fcse-follow-jumps -fcse-skip-blocks -fcx-fortran-rules -fcx-limited-range -fdata-sections -fdce -fdelayed-branch -fdelete-null-pointer-checks -fdevirtualize -fdevirtualize-speculatively -fdevirtualize-at-ltrans -fdse -fearly-inlining -fipa-sra -fexpensive-optimizations -ffat-lto-objects -ffast-math -ffinite-math-only -ffloat-store -fexcess-precision=style -fforward-propagate -ffp-contract=style -ffunction-sections -fgcse -fgcse-after-reload -fgcse-las -fgcse-lm -fgraphite-identity -fgcse-sm -fhoist-adjacent-loads -fif-conversion -fif-conversion2 -findirect-inlining -finline-functions -finline-functions-called-once -finline-limit=n -finline-small-functions -fipa-cp -fipa-cp-clone -fipa-bit-cp -fipa-vrp -fipa-pta -fipa-profile -fipa-pure-const -fipa-reference -fipa-reference-addressable -fipa-stack-alignment -fipa-icf -fira-algorithm=algorithm -flive-patching=level -fira-region=region -fira-hoist-pressure -fira-loop-pressure -fno-ira-share-save-slots -fno-ira-share-spill-slots -fisolate-erroneous-paths-dereference -fisolate-erroneous-paths-attribute -fivopts -fkeep-inline-functions -fkeep-static-functions -fkeep-static-consts -flimit-function-alignment -flive-range-shrinkage -floop-block -floop-interchange -floop-strip-mine -floop-unroll-and-jam -floop-nest-optimize -floop-parallelize-all -flra-remat -flto -flto-compression-level -flto-partition=alg -fmerge-all-constants -fmerge-constants -fmodulo-sched -fmodulo-sched-allow-regmoves -fmove-loop-invariants -fno-branch-count-reg -fno-defer-pop -fno-fp-int-builtin-inexact -fno-function-cse -fno-guess-branch-probability -fno-inline -fno-math-errno -fno-peephole -fno-peephole2 -fno-printf-return-value -fno-sched-interblock -fno-sched-spec -fno-signed-zeros -fno-toplevel-reorder -fno-trapping-math -fno-zero-initialized-in-bss -fomit-frame-pointer -foptimize-sibling-calls -fpartial-inlining -fpeel-loops -fpredictive-commoning -fprefetch-loop-arrays -fprofile-correction -fprofile-use -fprofile-use=path -fprofile-values -fprofile-reorder-functions -freciprocal-math -free -frename-registers 
-freorder-blocks -freorder-blocks-algorithm=algorithm -freorder-blocks-and-partition -freorder-functions -frerun-cse-after-loop -freschedule-modulo-scheduled-loops -frounding-math -fsave-optimization-record -fsched2-use-superblocks -fsched-pressure -fsched-spec-load -fsched-spec-load-dangerous -fsched-stalled-insns-dep[=n] -fsched-stalled-insns[=n] -fsched-group-heuristic -fsched-critical-path-heuristic -fsched-spec-insn-heuristic -fsched-rank-heuristic -fsched-last-insn-heuristic -fsched-dep-count-heuristic -fschedule-fusion -fschedule-insns -fschedule-insns2 -fsection-anchors -fselective-scheduling -fselective-scheduling2 -fsel-sched-pipelining -fsel-sched-pipelining-outer-loops -fsemantic-interposition -fshrink-wrap -fshrink-wrap-separate -fsignaling-nans -fsingle-precision-constant -fsplit-ivs-in-unroller -fsplit-loops -fsplit-paths -fsplit-wide-types -fssa-backprop -fssa-phiopt -fstdarg-opt -fstore-merging -fstrict-aliasing -fthread-jumps -ftracer -ftree-bit-ccp -ftree-builtin-call-dce -ftree-ccp -ftree-ch -ftree-coalesce-vars -ftree-copy-prop -ftree-dce -ftree-dominator-opts -ftree-dse -ftree-forwprop -ftree-fre -fcode-hoisting -ftree-loop-if-convert -ftree-loop-im -ftree-phiprop -ftree-loop-distribution -ftree-loop-distribute-patterns -ftree-loop-ivcanon -ftree-loop-linear -ftree-loop-optimize -ftree-loop-vectorize -ftree-parallelize-loops=n -ftree-pre -ftree-partial-pre -ftree-pta -ftree-reassoc -ftree-scev-cprop -ftree-sink -ftree-slsr -ftree-sra -ftree-switch-conversion -ftree-tail-merge -ftree-ter -ftree-vectorize -ftree-vrp -funconstrained-commons -funit-at-a-time -funroll-all-loops -funroll-loops -funsafe-math-optimizations -funswitch-loops -fipa-ra -fvariable-expansion-in-unroller -fvect-cost-model -fvpt -fweb -fwhole-program -fwpa -fuse-linker-plugin --param name=value -O -O0 -O1 -O2 -O3 -Os -Ofast -Og Program Instrumentation Options -p -pg -fprofile-arcs --coverage -ftest-coverage -fprofile-abs-path -fprofile-dir=path -fprofile-generate -fprofile-generate=path -fprofile-update=method -fprofile-filter-files=regex -fprofile-exclude-files=regex -fsanitize=style -fsanitize-recover -fsanitize-recover=style -fasan-shadow-offset=number -fsanitize-sections=s1,s2,... -fsanitize-undefined-trap-on-error -fbounds-check -fcf-protection=[full|branch|return|none] -fstack-protector -fstack-protector-all -fstack-protector-strong -fstack-protector-explicit -fstack-check -fstack-limit-register=reg -fstack-limit-symbol=sym -fno-stack-limit -fsplit-stack -fvtable-verify=[std|preinit|none] -fvtv-counts -fvtv-debug -finstrument-functions -finstrument-functions-exclude-function-list=sym,sym,... -finstrument-functions-exclude-file-list=file,file,... 
Preprocessor Options -Aquestion=answer -A-question[=answer] -C -CC -Dmacro[=defn] -dD -dI -dM -dN -dU -fdebug-cpp -fdirectives-only -fdollars-in-identifiers -fexec-charset=charset -fextended-identifiers -finput-charset=charset -fmacro-prefix-map=old=new -fno-canonical-system-headers -fpch-deps -fpch-preprocess -fpreprocessed -ftabstop=width -ftrack-macro-expansion -fwide-exec-charset=charset -fworking-directory -H -imacros file -include file -M -MD -MF -MG -MM -MMD -MP -MQ -MT -no-integrated-cpp -P -pthread -remap -traditional -traditional-cpp -trigraphs -Umacro -undef -Wp,option -Xpreprocessor option Assembler Options -Wa,option -Xassembler option Linker Options object-file-name -fuse-ld=linker -llibrary -nostartfiles -nodefaultlibs -nolibc -nostdlib -e entry --entry=entry -pie -pthread -r -rdynamic -s -static -static-pie -static-libgcc -static-libstdc++ -static-libasan -static-libtsan -static-liblsan -static-libubsan -shared -shared-libgcc -symbolic -T script -Wl,option -Xlinker option -u symbol -z keyword Directory Options -Bprefix -Idir -I- -idirafter dir -imacros file -imultilib dir -iplugindir=dir -iprefix file -iquote dir -isysroot dir -isystem dir -iwithprefix dir -iwithprefixbefore dir -Ldir -no-canonical-prefixes --no-sysroot-suffix -nostdinc -nostdinc++ --sysroot=dir Code Generation Options -fcall-saved-reg -fcall-used-reg -ffixed-reg -fexceptions -fnon-call-exceptions -fdelete-dead-exceptions -funwind-tables -fasynchronous-unwind-tables -fno-gnu-unique -finhibit-size-directive -fno-common -fno-ident -fpcc-struct-return -fpic -fPIC -fpie -fPIE -fno-plt -fno-jump-tables -frecord-gcc-switches -freg-struct-return -fshort-enums -fshort-wchar -fverbose-asm -fpack-struct[=n] -fleading-underscore -ftls-model=model -fstack-reuse=reuse_level -ftrampolines -ftrapv -fwrapv -fvisibility=[default|internal|hidden|protected] -fstrict-volatile-bitfields -fsync-libcalls Developer Options -dletters -dumpspecs -dumpmachine -dumpversion -dumpfullversion -fchecking -fchecking=n -fdbg-cnt-list -fdbg-cnt=counter-value-list -fdisable-ipa-pass_name -fdisable-rtl-pass_name -fdisable-rtl-pass-name=range-list -fdisable-tree-pass_name -fdisable-tree-pass-name=range-list -fdump-debug -fdump-earlydebug -fdump-noaddr -fdump-unnumbered -fdump-unnumbered-links -fdump-final-insns[=file] -fdump-ipa-all -fdump-ipa-cgraph -fdump-ipa-inline -fdump-lang-all -fdump-lang-switch -fdump-lang-switch-options -fdump-lang-switch-options=filename -fdump-passes -fdump-rtl-pass -fdump-rtl-pass=filename -fdump-statistics -fdump-tree-all -fdump-tree-switch -fdump-tree-switch-options -fdump-tree-switch-options=filename -fcompare-debug[=opts] -fcompare-debug-second -fenable-kind-pass -fenable-kind-pass=range-list -fira-verbose=n -flto-report -flto-report-wpa -fmem-report-wpa -fmem-report -fpre-ipa-mem-report -fpost-ipa-mem-report -fopt-info -fopt-info-options[=file] -fprofile-report -frandom-seed=string -fsched-verbose=n -fsel-sched-verbose -fsel-sched-dump-cfg -fsel-sched-pipelining-verbose -fstats -fstack-usage -ftime-report -ftime-report-details -fvar-tracking-assignments-toggle -gtoggle -print-file-name=library -print-libgcc-file-name -print-multi-directory -print-multi-lib -print-multi-os-directory -print-prog-name=program -print-search-dirs -Q -print-sysroot -print-sysroot-headers-suffix -save-temps -save-temps=cwd -save-temps=obj -time[=file] Machine-Dependent Options AArch64 Options -mabi=name -mbig-endian -mlittle-endian -mgeneral-regs-only -mcmodel=tiny -mcmodel=small -mcmodel=large -mstrict-align -mno-strict-align 
-momit-leaf-frame-pointer -mtls-dialect=desc -mtls-dialect=traditional -mtls-size=size -mfix-cortex-a53-835769 -mfix-cortex-a53-843419 -mlow-precision-recip-sqrt -mlow-precision-sqrt -mlow-precision-div -mpc-relative-literal-loads -msign-return-address=scope -mbranch-protection=none|standard|pac-ret[+leaf]|bti -mharden-sls=opts -march=name -mcpu=name -mtune=name -moverride=string -mverbose-cost-dump -mstack-protector-guard=guard -mstack-protector-guard-reg=sysreg -mstack-protector-guard-offset=offset -mtrack-speculation -moutline-atomics Adapteva Epiphany Options -mhalf-reg-file -mprefer-short-insn-regs -mbranch-cost=num -mcmove -mnops=num -msoft-cmpsf -msplit-lohi -mpost-inc -mpost-modify -mstack-offset=num -mround-nearest -mlong-calls -mshort-calls -msmall16 -mfp-mode=mode -mvect-double -max-vect-align=num -msplit-vecmove-early -m1reg-reg AMD GCN Options -march=gpu -mtune=gpu -mstack-size=bytes ARC Options -mbarrel-shifter -mjli-always -mcpu=cpu -mA6 -mARC600 -mA7 -mARC700 -mdpfp -mdpfp-compact -mdpfp-fast -mno-dpfp-lrsr -mea -mno-mpy -mmul32x16 -mmul64 -matomic -mnorm -mspfp -mspfp-compact -mspfp-fast -msimd -msoft-float -mswap -mcrc -mdsp-packa -mdvbf -mlock -mmac-d16 -mmac-24 -mrtsc -mswape -mtelephony -mxy -misize -mannotate-align -marclinux -marclinux_prof -mlong-calls -mmedium-calls -msdata -mirq-ctrl-saved -mrgf-banked-regs -mlpc-width=width -G num -mvolatile-cache -mtp-regno=regno -malign-call -mauto-modify-reg -mbbit-peephole -mno-brcc -mcase-vector-pcrel -mcompact-casesi -mno-cond-exec -mearly-cbranchsi -mexpand-adddi -mindexed-loads -mlra -mlra-priority-none -mlra-priority-compact mlra-priority-noncompact -mmillicode -mmixed-code -mq-class -mRcq -mRcw -msize-level=level -mtune=cpu -mmultcost=num -mcode-density-frame -munalign-prob-threshold=probability -mmpy-option=multo -mdiv-rem -mcode-density -mll64 -mfpu=fpu -mrf16 -mbranch-index ARM Options -mapcs-frame -mno-apcs-frame -mabi=name -mapcs-stack-check -mno-apcs-stack-check -mapcs-reentrant -mno-apcs-reentrant -mgeneral-regs-only -msched-prolog -mno-sched-prolog -mlittle-endian -mbig-endian -mbe8 -mbe32 -mfloat-abi=name -mfp16-format=name -mthumb-interwork -mno-thumb-interwork -mcpu=name -march=name -mfpu=name -mtune=name -mprint-tune-info -mstructure-size-boundary=n -mabort-on-noreturn -mlong-calls -mno-long-calls -msingle-pic-base -mno-single-pic-base -mpic-register=reg -mnop-fun-dllimport -mpoke-function-name -mthumb -marm -mflip-thumb -mtpcs-frame -mtpcs-leaf-frame -mcaller-super-interworking -mcallee-super-interworking -mtp=name -mtls-dialect=dialect -mword-relocations -mfix-cortex-m3-ldrd -munaligned-access -mneon-for-64bits -mslow-flash-data -masm-syntax-unified -mrestrict-it -mverbose-cost-dump -mpure-code -mcmse AVR Options -mmcu=mcu -mabsdata -maccumulate-args -mbranch-cost=cost -mcall-prologues -mgas-isr-prologues -mint8 -mn_flash=size -mno-interrupts -mmain-is-OS_task -mrelax -mrmw -mstrict-X -mtiny-stack -mfract-convert-truncate -mshort-calls -nodevicelib -nodevicespecs -Waddr-space-convert -Wmisspelled-isr Blackfin Options -mcpu=cpu[-sirevision] -msim -momit-leaf-frame-pointer -mno-omit-leaf-frame-pointer -mspecld-anomaly -mno-specld-anomaly -mcsync-anomaly -mno-csync-anomaly -mlow-64k -mno-low64k -mstack-check-l1 -mid-shared-library -mno-id-shared-library -mshared-library-id=n -mleaf-id-shared-library -mno-leaf-id-shared-library -msep-data -mno-sep-data -mlong-calls -mno-long-calls -mfast-fp -minline-plt -mmulticore -mcorea -mcoreb -msdram -micplb C6X Options -mbig-endian -mlittle-endian -march=cpu -msim 
-msdata=sdata-type CRIS Options -mcpu=cpu -march=cpu -mtune=cpu -mmax-stack-frame=n -melinux-stacksize=n -metrax4 -metrax100 -mpdebug -mcc-init -mno-side-effects -mstack-align -mdata-align -mconst-align -m32-bit -m16-bit -m8-bit -mno-prologue-epilogue -mno-gotplt -melf -maout -melinux -mlinux -sim -sim2 -mmul-bug-workaround -mno-mul-bug-workaround CR16 Options -mmac -mcr16cplus -mcr16c -msim -mint32 -mbit-ops -mdata-model=model C-SKY Options -march=arch -mcpu=cpu -mbig-endian -EB -mlittle-endian -EL -mhard-float -msoft-float -mfpu=fpu -mdouble-float -mfdivdu -melrw -mistack -mmp -mcp -mcache -msecurity -mtrust -mdsp -medsp -mvdsp -mdiv -msmart -mhigh-registers -manchor -mpushpop -mmultiple-stld -mconstpool -mstack-size -mccrt -mbranch-cost=n -mcse-cc -msched-prolog Darwin Options -all_load -allowable_client -arch -arch_errors_fatal -arch_only -bind_at_load -bundle -bundle_loader -client_name -compatibility_version -current_version -dead_strip -dependency-file -dylib_file -dylinker_install_name -dynamic -dynamiclib -exported_symbols_list -filelist -flat_namespace -force_cpusubtype_ALL -force_flat_namespace -headerpad_max_install_names -iframework -image_base -init -install_name -keep_private_externs -multi_module -multiply_defined -multiply_defined_unused -noall_load -no_dead_strip_inits_and_terms -nofixprebinding -nomultidefs -noprebind -noseglinkedit -pagezero_size -prebind -prebind_all_twolevel_modules -private_bundle -read_only_relocs -sectalign -sectobjectsymbols -whyload -seg1addr -sectcreate -sectobjectsymbols -sectorder -segaddr -segs_read_only_addr -segs_read_write_addr -seg_addr_table -seg_addr_table_filename -seglinkedit -segprot -segs_read_only_addr -segs_read_write_addr -single_module -static -sub_library -sub_umbrella -twolevel_namespace -umbrella -undefined -unexported_symbols_list -weak_reference_mismatches -whatsloaded -F -gused -gfull -mmacosx-version-min=version -mkernel -mone-byte-bool DEC Alpha Options -mno-fp-regs -msoft-float -mieee -mieee-with-inexact -mieee-conformant -mfp-trap-mode=mode -mfp-rounding-mode=mode -mtrap-precision=mode -mbuild-constants -mcpu=cpu-type -mtune=cpu-type -mbwx -mmax -mfix -mcix -mfloat-vax -mfloat-ieee -mexplicit-relocs -msmall-data -mlarge-data -msmall-text -mlarge-text -mmemory-latency=time FR30 Options -msmall-model -mno-lsim FT32 Options -msim -mlra -mnodiv -mft32b -mcompress -mnopm FRV Options -mgpr-32 -mgpr-64 -mfpr-32 -mfpr-64 -mhard-float -msoft-float -malloc-cc -mfixed-cc -mdword -mno-dword -mdouble -mno-double -mmedia -mno-media -mmuladd -mno-muladd -mfdpic -minline-plt -mgprel-ro -multilib-library-pic -mlinked-fp -mlong-calls -malign-labels -mlibrary-pic -macc-4 -macc-8 -mpack -mno-pack -mno-eflags -mcond-move -mno-cond-move -moptimize-membar -mno-optimize-membar -mscc -mno-scc -mcond-exec -mno-cond-exec -mvliw-branch -mno-vliw-branch -mmulti-cond-exec -mno-multi-cond-exec -mnested-cond-exec -mno-nested-cond-exec -mtomcat-stats -mTLS -mtls -mcpu=cpu GNU/Linux Options -mglibc -muclibc -mmusl -mbionic -mandroid -tno-android-cc -tno-android-ld H8/300 Options -mrelax -mh -ms -mn -mexr -mno-exr -mint32 -malign-300 HPPA Options -march=architecture-type -mcaller-copies -mdisable-fpregs -mdisable-indexing -mfast-indirect-calls -mgas -mgnu-ld -mhp-ld -mfixed-range=register-range -mjump-in-delay -mlinker-opt -mlong-calls -mlong-load-store -mno-disable-fpregs -mno-disable-indexing -mno-fast-indirect-calls -mno-gas -mno-jump-in-delay -mno-long-load-store -mno-portable-runtime -mno-soft-float -mno-space-regs -msoft-float -mpa-risc-1-0 
-mpa-risc-1-1 -mpa-risc-2-0 -mportable-runtime -mschedule=cpu-type -mspace-regs -msio -mwsio -munix=unix-std -nolibdld -static -threads IA-64 Options -mbig-endian -mlittle-endian -mgnu-as -mgnu-ld -mno-pic -mvolatile-asm-stop -mregister-names -msdata -mno-sdata -mconstant-gp -mauto-pic -mfused-madd -minline-float-divide-min-latency -minline-float-divide-max-throughput -mno-inline-float-divide -minline-int-divide-min-latency -minline-int-divide-max-throughput -mno-inline-int-divide -minline-sqrt-min-latency -minline-sqrt-max-throughput -mno-inline-sqrt -mdwarf2-asm -mearly-stop-bits -mfixed-range=register-range -mtls-size=tls-size -mtune=cpu- type -milp32 -mlp64 -msched-br-data-spec -msched-ar-data-spec -msched-control-spec -msched-br-in-data-spec -msched-ar-in-data-spec -msched-in-control-spec -msched-spec-ldc -msched-spec-control-ldc -msched-prefer-non-data-spec-insns -msched-prefer-non-control-spec-insns -msched-stop-bits-after-every-cycle -msched-count-spec-in-critical-path -msel-sched-dont-check-control-spec -msched-fp-mem-deps-zero-cost -msched-max-memory-insns-hard-limit -msched-max-memory-insns=max-insns LM32 Options -mbarrel-shift-enabled -mdivide-enabled -mmultiply-enabled -msign-extend-enabled -muser-enabled M32R/D Options -m32r2 -m32rx -m32r -mdebug -malign-loops -mno-align-loops -missue-rate=number -mbranch-cost=number -mmodel=code-size-model-type -msdata=sdata-type -mno-flush-func -mflush-func=name -mno-flush-trap -mflush-trap=number -G num M32C Options -mcpu=cpu -msim -memregs=number M680x0 Options -march=arch -mcpu=cpu -mtune=tune -m68000 -m68020 -m68020-40 -m68020-60 -m68030 -m68040 -m68060 -mcpu32 -m5200 -m5206e -m528x -m5307 -m5407 -mcfv4e -mbitfield -mno-bitfield -mc68000 -mc68020 -mnobitfield -mrtd -mno-rtd -mdiv -mno-div -mshort -mno-short -mhard-float -m68881 -msoft-float -mpcrel -malign-int -mstrict-align -msep-data -mno-sep-data -mshared-library-id=n -mid-shared-library -mno-id-shared-library -mxgot -mno-xgot -mlong-jump-table-offsets MCore Options -mhardlit -mno-hardlit -mdiv -mno-div -mrelax-immediates -mno-relax-immediates -mwide-bitfields -mno-wide-bitfields -m4byte-functions -mno-4byte-functions -mcallgraph-data -mno-callgraph-data -mslow-bytes -mno-slow-bytes -mno-lsim -mlittle-endian -mbig-endian -m210 -m340 -mstack-increment MeP Options -mabsdiff -mall-opts -maverage -mbased=n -mbitops -mc=n -mclip -mconfig=name -mcop -mcop32 -mcop64 -mivc2 -mdc -mdiv -meb -mel -mio-volatile -ml -mleadz -mm -mminmax -mmult -mno-opts -mrepeat -ms -msatur -msdram -msim -msimnovec -mtf -mtiny=n MicroBlaze Options -msoft-float -mhard-float -msmall-divides -mcpu=cpu -mmemcpy -mxl-soft-mul -mxl-soft-div -mxl-barrel-shift -mxl-pattern-compare -mxl-stack-check -mxl-gp-opt -mno-clearbss -mxl-multiply-high -mxl-float-convert -mxl-float-sqrt -mbig-endian -mlittle-endian -mxl-reorder -mxl-mode-app- model -mpic-data-is-text-relative MIPS Options -EL -EB -march=arch -mtune=arch -mips1 -mips2 -mips3 -mips4 -mips32 -mips32r2 -mips32r3 -mips32r5 -mips32r6 -mips64 -mips64r2 -mips64r3 -mips64r5 -mips64r6 -mips16 -mno-mips16 -mflip-mips16 -minterlink-compressed -mno-interlink-compressed -minterlink-mips16 -mno-interlink-mips16 -mabi=abi -mabicalls -mno-abicalls -mshared -mno-shared -mplt -mno-plt -mxgot -mno-xgot -mgp32 -mgp64 -mfp32 -mfpxx -mfp64 -mhard-float -msoft-float -mno-float -msingle-float -mdouble-float -modd-spreg -mno-odd-spreg -mabs=mode -mnan=encoding -mdsp -mno-dsp -mdspr2 -mno-dspr2 -mmcu -mmno-mcu -meva -mno-eva -mvirt -mno-virt -mxpa -mno-xpa -mcrc -mno-crc -mginv -mno-ginv 
-mmicromips -mno-micromips -mmsa -mno-msa -mloongson-mmi -mno-loongson-mmi -mloongson-ext -mno-loongson-ext -mloongson-ext2 -mno-loongson-ext2 -mfpu=fpu-type -msmartmips -mno-smartmips -mpaired-single -mno-paired-single -mdmx -mno-mdmx -mips3d -mno-mips3d -mmt -mno-mt -mllsc -mno-llsc -mlong64 -mlong32 -msym32 -mno-sym32 -Gnum -mlocal-sdata -mno-local-sdata -mextern-sdata -mno-extern-sdata -mgpopt -mno-gopt -membedded-data -mno-embedded-data -muninit-const-in-rodata -mno-uninit-const-in-rodata -mcode-readable=setting -msplit-addresses -mno-split-addresses -mexplicit-relocs -mno-explicit-relocs -mcheck-zero-division -mno-check-zero-division -mdivide-traps -mdivide-breaks -mload-store-pairs -mno-load-store-pairs -mmemcpy -mno-memcpy -mlong-calls -mno-long-calls -mmad -mno-mad -mimadd -mno-imadd -mfused-madd -mno-fused-madd -nocpp -mfix-24k -mno-fix-24k -mfix-r4000 -mno-fix-r4000 -mfix-r4400 -mno-fix-r4400 -mfix-r5900 -mno-fix-r5900 -mfix-r10000 -mno-fix-r10000 -mfix-rm7000 -mno-fix-rm7000 -mfix-vr4120 -mno-fix-vr4120 -mfix-vr4130 -mno-fix-vr4130 -mfix-sb1 -mno-fix-sb1 -mflush-func=func -mno-flush-func -mbranch-cost=num -mbranch-likely -mno-branch-likely -mcompact-branches=policy -mfp-exceptions -mno-fp-exceptions -mvr4130-align -mno-vr4130-align -msynci -mno-synci -mlxc1-sxc1 -mno-lxc1-sxc1 -mmadd4 -mno-madd4 -mrelax-pic-calls -mno-relax-pic-calls -mmcount-ra-address -mframe-header-opt -mno-frame-header-opt MMIX Options -mlibfuncs -mno-libfuncs -mepsilon -mno-epsilon -mabi=gnu -mabi=mmixware -mzero-extend -mknuthdiv -mtoplevel-symbols -melf -mbranch-predict -mno-branch-predict -mbase-addresses -mno-base-addresses -msingle-exit -mno-single-exit MN10300 Options -mmult-bug -mno-mult-bug -mno-am33 -mam33 -mam33-2 -mam34 -mtune=cpu-type -mreturn-pointer-on-d0 -mno-crt0 -mrelax -mliw -msetlb Moxie Options -meb -mel -mmul.x -mno-crt0 MSP430 Options -msim -masm-hex -mmcu= -mcpu= -mlarge -msmall -mrelax -mwarn-mcu -mcode-region= -mdata-region= -msilicon-errata= -msilicon-errata-warn= -mhwmult= -minrt NDS32 Options -mbig-endian -mlittle-endian -mreduced-regs -mfull-regs -mcmov -mno-cmov -mext-perf -mno-ext-perf -mext-perf2 -mno-ext-perf2 -mext-string -mno-ext-string -mv3push -mno-v3push -m16bit -mno-16bit -misr-vector-size=num -mcache-block-size=num -march=arch -mcmodel=code-model -mctor-dtor -mrelax Nios II Options -G num -mgpopt=option -mgpopt -mno-gpopt -mgprel-sec=regexp -mr0rel-sec=regexp -mel -meb -mno-bypass-cache -mbypass-cache -mno-cache-volatile -mcache-volatile -mno-fast-sw-div -mfast-sw-div -mhw-mul -mno-hw-mul -mhw-mulx -mno-hw-mulx -mno-hw-div -mhw-div -mcustom-insn=N -mno-custom-insn -mcustom-fpu-cfg=name -mhal -msmallc -msys-crt0=name -msys-lib=name -march=arch -mbmx -mno-bmx -mcdx -mno-cdx Nvidia PTX Options -m32 -m64 -mmainkernel -moptimize OpenRISC Options -mboard=name -mnewlib -mhard-mul -mhard-div -msoft-mul -msoft-div -mcmov -mror -msext -msfimm -mshftimm PDP-11 Options -mfpu -msoft-float -mac0 -mno-ac0 -m40 -m45 -m10 -mint32 -mno-int16 -mint16 -mno-int32 -msplit -munix-asm -mdec-asm -mgnu-asm -mlra picoChip Options -mae=ae_type -mvliw-lookahead=N -msymbol-as-address -mno-inefficient-warnings PowerPC Options See RS/6000 and PowerPC Options. 
RISC-V Options -mbranch-cost=N-instruction -mplt -mno-plt -mabi=ABI-string -mfdiv -mno-fdiv -mdiv -mno-div -march=ISA-string -mtune=processor-string -mpreferred-stack-boundary=num -msmall-data-limit=N-bytes -msave-restore -mno-save-restore -mstrict-align -mno-strict-align -mcmodel=medlow -mcmodel=medany -mexplicit-relocs -mno-explicit-relocs -mrelax -mno-relax -mriscv-attribute -mmo-riscv-attribute RL78 Options -msim -mmul=none -mmul=g13 -mmul=g14 -mallregs -mcpu=g10 -mcpu=g13 -mcpu=g14 -mg10 -mg13 -mg14 -m64bit-doubles -m32bit-doubles -msave-mduc-in-interrupts RS/6000 and PowerPC Options -mcpu=cpu-type -mtune=cpu-type -mcmodel=code-model -mpowerpc64 -maltivec -mno-altivec -mpowerpc-gpopt -mno-powerpc-gpopt -mpowerpc-gfxopt -mno-powerpc-gfxopt -mmfcrf -mno-mfcrf -mpopcntb -mno-popcntb -mpopcntd -mno-popcntd -mfprnd -mno-fprnd -mcmpb -mno-cmpb -mmfpgpr -mno-mfpgpr -mhard-dfp -mno-hard-dfp -mfull-toc -mminimal-toc -mno-fp-in-toc -mno-sum-in-toc -m64 -m32 -mxl-compat -mno-xl-compat -mpe -malign-power -malign-natural -msoft-float -mhard-float -mmultiple -mno-multiple -mupdate -mno-update -mavoid-indexed-addresses -mno-avoid-indexed-addresses -mfused-madd -mno-fused-madd -mbit-align -mno-bit-align -mstrict-align -mno-strict-align -mrelocatable -mno-relocatable -mrelocatable-lib -mno-relocatable-lib -mtoc -mno-toc -mlittle -mlittle-endian -mbig -mbig-endian -mdynamic-no-pic -mswdiv -msingle-pic-base -mprioritize-restricted-insns=priority -msched-costly-dep=dependence_type -minsert-sched-nops=scheme -mcall-aixdesc -mcall-eabi -mcall-freebsd -mcall-linux -mcall-netbsd -mcall-openbsd -mcall-sysv -mcall-sysv-eabi -mcall-sysv-noeabi -mtraceback=traceback_type -maix-struct-return -msvr4-struct-return -mabi=abi-type -msecure-plt -mbss-plt -mlongcall -mno-longcall -mpltseq -mno-pltseq -mblock-move-inline-limit=num -mblock-compare-inline-limit=num -mblock-compare-inline-loop-limit=num -mstring-compare-inline-limit=num -misel -mno-isel -mvrsave -mno-vrsave -mmulhw -mno-mulhw -mdlmzb -mno-dlmzb -mprototype -mno-prototype -msim -mmvme -mads -myellowknife -memb -msdata -msdata=opt -mreadonly-in-sdata -mvxworks -G num -mrecip -mrecip=opt -mno-recip -mrecip-precision -mno-recip-precision -mveclibabi=type -mfriz -mno-friz -mpointers-to-nested-functions -mno-pointers-to-nested-functions -msave-toc-indirect -mno-save-toc-indirect -mpower8-fusion -mno-mpower8-fusion -mpower8-vector -mno-power8-vector -mcrypto -mno-crypto -mhtm -mno-htm -mquad-memory -mno-quad-memory -mquad-memory-atomic -mno-quad-memory-atomic -mcompat-align-parm -mno-compat-align-parm -mfloat128 -mno-float128 -mfloat128-hardware -mno-float128-hardware -mgnu-attribute -mno-gnu-attribute -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset RX Options -m64bit-doubles -m32bit-doubles -fpu -nofpu -mcpu= -mbig-endian-data -mlittle-endian-data -msmall-data -msim -mno-sim -mas100-syntax -mno-as100-syntax -mrelax -mmax-constant-size= -mint-register= -mpid -mallow-string-insns -mno-allow-string-insns -mjsr -mno-warn-multiple-fast-interrupts -msave-acc-in-interrupts S/390 and zSeries Options -mtune=cpu-type -march=cpu-type -mhard-float -msoft-float -mhard-dfp -mno-hard-dfp -mlong-double-64 -mlong-double-128 -mbackchain -mno-backchain -mpacked-stack -mno-packed-stack -msmall-exec -mno-small-exec -mmvcle -mno-mvcle -m64 -m31 -mdebug -mno-debug -mesa -mzarch -mhtm -mvx -mzvector -mtpf-trace -mno-tpf-trace -mfused-madd -mno-fused-madd -mwarn-framesize -mwarn-dynamicstack -mstack-size -mstack-guard 
-mhotpatch=halfwords,halfwords Score Options -meb -mel -mnhwloop -muls -mmac -mscore5 -mscore5u -mscore7 -mscore7d SH Options -m1 -m2 -m2e -m2a-nofpu -m2a-single-only -m2a-single -m2a -m3 -m3e -m4-nofpu -m4-single-only -m4-single -m4 -m4a-nofpu -m4a-single-only -m4a-single -m4a -m4al -mb -ml -mdalign -mrelax -mbigtable -mfmovd -mrenesas -mno-renesas -mnomacsave -mieee -mno-ieee -mbitops -misize -minline-ic_invalidate -mpadstruct -mprefergot -musermode -multcost=number -mdiv=strategy -mdivsi3_libfunc=name -mfixed-range=register-range -maccumulate-outgoing-args -matomic-model=atomic-model -mbranch-cost=num -mzdcbranch -mno-zdcbranch -mcbranch-force-delay-slot -mfused-madd -mno-fused-madd -mfsca -mno-fsca -mfsrra -mno-fsrra -mpretend-cmove -mtas Solaris 2 Options -mclear-hwcap -mno-clear-hwcap -mimpure-text -mno-impure-text -pthreads SPARC Options -mcpu=cpu-type -mtune=cpu-type -mcmodel=code- model -mmemory-model=mem-model -m32 -m64 -mapp-regs -mno-app-regs -mfaster-structs -mno-faster-structs -mflat -mno-flat -mfpu -mno-fpu -mhard-float -msoft-float -mhard-quad-float -msoft-quad-float -mstack-bias -mno-stack-bias -mstd-struct-return -mno-std-struct-return -munaligned-doubles -mno-unaligned-doubles -muser-mode -mno-user-mode -mv8plus -mno-v8plus -mvis -mno-vis -mvis2 -mno-vis2 -mvis3 -mno-vis3 -mvis4 -mno-vis4 -mvis4b -mno-vis4b -mcbcond -mno-cbcond -mfmaf -mno-fmaf -mfsmuld -mno-fsmuld -mpopc -mno-popc -msubxc -mno-subxc -mfix-at697f -mfix-ut699 -mfix-ut700 -mfix-gr712rc -mlra -mno-lra SPU Options -mwarn-reloc -merror-reloc -msafe-dma -munsafe-dma -mbranch-hints -msmall-mem -mlarge-mem -mstdmain -mfixed-range=register-range -mea32 -mea64 -maddress-space-conversion -mno-address-space-conversion -mcache-size=cache-size -matomic-updates -mno-atomic-updates System V Options -Qy -Qn -YP,paths -Ym,dir TILE-Gx Options -mcpu=CPU -m32 -m64 -mbig-endian -mlittle-endian -mcmodel=code-model TILEPro Options -mcpu=cpu -m32 V850 Options -mlong-calls -mno-long-calls -mep -mno-ep -mprolog-function -mno-prolog-function -mspace -mtda=n -msda=n -mzda=n -mapp-regs -mno-app-regs -mdisable-callt -mno-disable-callt -mv850e2v3 -mv850e2 -mv850e1 -mv850es -mv850e -mv850 -mv850e3v5 -mloop -mrelax -mlong-jumps -msoft-float -mhard-float -mgcc-abi -mrh850-abi -mbig-switch VAX Options -mg -mgnu -munix Visium Options -mdebug -msim -mfpu -mno-fpu -mhard-float -msoft-float -mcpu=cpu-type -mtune=cpu-type -msv-mode -muser-mode VMS Options -mvms-return-codes -mdebug-main=prefix -mmalloc64 -mpointer-size=size VxWorks Options -mrtp -non-static -Bstatic -Bdynamic -Xbind-lazy -Xbind-now x86 Options -mtune=cpu-type -march=cpu-type -mtune-ctrl=feature-list -mdump-tune-features -mno-default -mfpmath=unit -masm=dialect -mno-fancy-math-387 -mno-fp-ret-in-387 -m80387 -mhard-float -msoft-float -mno-wide-multiply -mrtd -malign-double -mpreferred-stack-boundary=num -mincoming-stack-boundary=num -mcld -mcx16 -msahf -mmovbe -mcrc32 -mrecip -mrecip=opt -mvzeroupper -mprefer-avx128 -mprefer-vector-width=opt -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4 -mavx -mavx2 -mavx512f -mavx512pf -mavx512er -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -msha -maes -mpclmul -mfsgsbase -mrdrnd -mf16c -mfma -mpconfig -mwbnoinvd -mptwrite -mprefetchwt1 -mclflushopt -mclwb -mxsavec -mxsaves -msse4a -m3dnow -m3dnowa -mpopcnt -mabm -mbmi -mtbm -mfma4 -mxop -madx -mlzcnt -mbmi2 -mfxsr -mxsave -mxsaveopt -mrtm -mhle -mlwp -mmwaitx -mclzero -mpku -mthreads -mgfni -mvaes -mwaitpkg -mshstk -mmanual-endbr -mforce-indirect-call 
-mavx512vbmi2 -mvpclmulqdq -mavx512bitalg -mmovdiri -mmovdir64b -mavx512vpopcntdq -mavx5124fmaps -mavx512vnni -mavx5124vnniw -mprfchw -mrdpid -mrdseed -msgx -mcldemote -mms-bitfields -mno-align-stringops -minline-all-stringops -minline-stringops-dynamically -mstringop-strategy=alg -mmemcpy-strategy=strategy -mmemset-strategy=strategy -mpush-args -maccumulate-outgoing-args -m128bit-long-double -m96bit-long-double -mlong-double-64 -mlong-double-80 -mlong-double-128 -mregparm=num -msseregparm -mveclibabi=type -mvect8-ret-in-mem -mpc32 -mpc64 -mpc80 -mstackrealign -momit-leaf-frame-pointer -mno-red-zone -mno-tls-direct-seg-refs -mcmodel=code-model -mabi=name -maddress-mode=mode -m32 -m64 -mx32 -m16 -miamcu -mlarge-data-threshold=num -msse2avx -mfentry -mrecord-mcount -mnop-mcount -m8bit-idiv -minstrument-return=type -mfentry-name=name -mfentry-section=name -mavx256-split-unaligned-load -mavx256-split-unaligned-store -malign-data=type -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset -mstack-protector-guard-symbol=symbol -mgeneral-regs-only -mcall-ms2sysv-xlogues -mindirect-branch=choice -mfunction-return=choice -mindirect-branch-register x86 Windows Options -mconsole -mcygwin -mno-cygwin -mdll -mnop-fun-dllimport -mthread -municode -mwin32 -mwindows -fno-set-stack-executable Xstormy16 Options -msim Xtensa Options -mconst16 -mno-const16 -mfused-madd -mno-fused-madd -mforce-no-pic -mserialize-volatile -mno-serialize-volatile -mtext-section-literals -mno-text-section-literals -mauto-litpools -mno-auto-litpools -mtarget-align -mno-target-align -mlongcalls -mno-longcalls zSeries Options See S/390 and zSeries Options. Options Controlling the Kind of Output Compilation can involve up to four stages: preprocessing, compilation proper, assembly and linking, always in that order. GCC is capable of preprocessing and compiling several files either into several assembler input files, or into one assembler input file; then each assembler input file produces an object file, and linking combines all the object files (those newly compiled, and those specified as input) into an executable file. For any given input file, the file name suffix determines what kind of compilation is done: file.c C source code that must be preprocessed. file.i C source code that should not be preprocessed. file.ii C++ source code that should not be preprocessed. file.m Objective-C source code. Note that you must link with the libobjc library to make an Objective-C program work. file.mi Objective-C source code that should not be preprocessed. file.mm file.M Objective-C++ source code. Note that you must link with the libobjc library to make an Objective-C++ program work. Note that .M refers to a literal capital M. file.mii Objective-C++ source code that should not be preprocessed. file.h C, C++, Objective-C or Objective-C++ header file to be turned into a precompiled header (default), or C, C++ header file to be turned into an Ada spec (via the -fdump-ada-spec switch). file.cc file.cp file.cxx file.cpp file.CPP file.c++ file.C C++ source code that must be preprocessed. Note that in .cxx, the last two letters must both be literally x. Likewise, .C refers to a literal capital C. file.mm file.M Objective-C++ source code that must be preprocessed. file.mii Objective-C++ source code that should not be preprocessed. file.hh file.H file.hp file.hxx file.hpp file.HPP file.h++ file.tcc C++ header file to be turned into a precompiled header or Ada spec. 
file.f file.for file.ftn Fixed form Fortran source code that should not be preprocessed. file.F file.FOR file.fpp file.FPP file.FTN Fixed form Fortran source code that must be preprocessed (with the traditional preprocessor). file.f90 file.f95 file.f03 file.f08 Free form Fortran source code that should not be preprocessed. file.F90 file.F95 file.F03 file.F08 Free form Fortran source code that must be preprocessed (with the traditional preprocessor). file.go Go source code. file.brig BRIG files (binary representation of HSAIL). file.d D source code. file.di D interface file. file.dd D documentation code (Ddoc). file.ads Ada source code file that contains a library unit declaration (a declaration of a package, subprogram, or generic, or a generic instantiation), or a library unit renaming declaration (a package, generic, or subprogram renaming declaration). Such files are also called specs. file.adb Ada source code file containing a library unit body (a subprogram or package body). Such files are also called bodies. file.s Assembler code. file.S file.sx Assembler code that must be preprocessed. other An object file to be fed straight into linking. Any file name with no recognized suffix is treated this way. You can specify the input language explicitly with the -x option: -x language Specify explicitly the language for the following input files (rather than letting the compiler choose a default based on the file name suffix). This option applies to all following input files until the next -x option. Possible values for language are: c c-header cpp-output c++ c++-header c++-cpp-output objective-c objective-c-header objective-c-cpp-output objective-c++ objective-c++-header objective-c++-cpp-output assembler assembler-with-cpp ada d f77 f77-cpp-input f95 f95-cpp-input go brig -x none Turn off any specification of a language, so that subsequent files are handled according to their file name suffixes (as they are if -x has not been used at all). If you only want some of the stages of compilation, you can use -x (or filename suffixes) to tell gcc where to start, and one of the options -c, -S, or -E to say where gcc is to stop. Note that some combinations (for example, -x cpp-output -E) instruct gcc to do nothing at all. -c Compile or assemble the source files, but do not link. The linking stage simply is not done. The ultimate output is in the form of an object file for each source file. By default, the object file name for a source file is made by replacing the suffix .c, .i, .s, etc., with .o. Unrecognized input files, not requiring compilation or assembly, are ignored. -S Stop after the stage of compilation proper; do not assemble. The output is in the form of an assembler code file for each non-assembler input file specified. By default, the assembler file name for a source file is made by replacing the suffix .c, .i, etc., with .s. Input files that don't require compilation are ignored. -E Stop after the preprocessing stage; do not run the compiler proper. The output is in the form of preprocessed source code, which is sent to the standard output. Input files that don't require preprocessing are ignored. -o file Place output in file file. This applies to whatever sort of output is being produced, whether it be an executable file, an object file, an assembler file or preprocessed C code. 
If -o is not specified, the default is to put an executable file in a.out, the object file for source.suffix in source.o, its assembler file in source.s, a precompiled header file in source.suffix.gch, and all preprocessed C source on standard output. -v Print (on standard error output) the commands executed to run the stages of compilation. Also print the version number of the compiler driver program and of the preprocessor and the compiler proper. -### Like -v except the commands are not executed and arguments are quoted unless they contain only alphanumeric characters or "./-_". This is useful for shell scripts to capture the driver-generated command lines. --help Print (on the standard output) a description of the command- line options understood by gcc. If the -v option is also specified then --help is also passed on to the various processes invoked by gcc, so that they can display the command-line options they accept. If the -Wextra option has also been specified (prior to the --help option), then command-line options that have no documentation associated with them are also displayed. --target-help Print (on the standard output) a description of target- specific command-line options for each tool. For some targets extra target-specific information may also be printed. --help={class|[^]qualifier}[,...] Print (on the standard output) a description of the command- line options understood by the compiler that fit into all specified classes and qualifiers. These are the supported classes: optimizers Display all of the optimization options supported by the compiler. warnings Display all of the options controlling warning messages produced by the compiler. target Display target-specific options. Unlike the --target-help option however, target-specific options of the linker and assembler are not displayed. This is because those tools do not currently support the extended --help= syntax. params Display the values recognized by the --param option. language Display the options supported for language, where language is the name of one of the languages supported in this version of GCC. common Display the options that are common to all languages. These are the supported qualifiers: undocumented Display only those options that are undocumented. joined Display options taking an argument that appears after an equal sign in the same continuous piece of text, such as: --help=target. separate Display options taking an argument that appears as a separate word following the original option, such as: -o output-file. Thus for example to display all the undocumented target- specific switches supported by the compiler, use: --help=target,undocumented The sense of a qualifier can be inverted by prefixing it with the ^ character, so for example to display all binary warning options (i.e., ones that are either on or off and that do not take an argument) that have a description, use: --help=warnings,^joined,^undocumented The argument to --help= should not consist solely of inverted qualifiers. Combining several classes is possible, although this usually restricts the output so much that there is nothing to display. One case where it does work, however, is when one of the classes is target. For example, to display all the target-specific optimization options, use: --help=target,optimizers The --help= option can be repeated on the command line. Each successive use displays its requested class of options, skipping those that have already been displayed. 
If --help is also specified anywhere on the command line then this takes precedence over any --help= option. If the -Q option appears on the command line before the --help= option, then the descriptive text displayed by --help= is changed. Instead of describing the displayed options, an indication is given as to whether the option is enabled, disabled or set to a specific value (assuming that the compiler knows this at the point where the --help= option is used). Here is a truncated example from the ARM port of gcc: % gcc -Q -mabi=2 --help=target -c The following options are target specific: -mabi= 2 -mabort-on-noreturn [disabled] -mapcs [disabled] The output is sensitive to the effects of previous command- line options, so for example it is possible to find out which optimizations are enabled at -O2 by using: -Q -O2 --help=optimizers Alternatively you can discover which binary optimizations are enabled by -O3 by using: gcc -c -Q -O3 --help=optimizers > /tmp/O3-opts gcc -c -Q -O2 --help=optimizers > /tmp/O2-opts diff /tmp/O2-opts /tmp/O3-opts | grep enabled --version Display the version number and copyrights of the invoked GCC. -pass-exit-codes Normally the gcc program exits with the code of 1 if any phase of the compiler returns a non-success return code. If you specify -pass-exit-codes, the gcc program instead returns with the numerically highest error produced by any phase returning an error indication. The C, C++, and Fortran front ends return 4 if an internal compiler error is encountered. -pipe Use pipes rather than temporary files for communication between the various stages of compilation. This fails to work on some systems where the assembler is unable to read from a pipe; but the GNU assembler has no trouble. -specs=file Process file after the compiler reads in the standard specs file, in order to override the defaults which the gcc driver program uses when determining what switches to pass to cc1, cc1plus, as, ld, etc. More than one -specs=file can be specified on the command line, and they are processed in order, from left to right. -wrapper Invoke all subcommands under a wrapper program. The name of the wrapper program and its parameters are passed as a comma separated list. gcc -c t.c -wrapper gdb,--args This invokes all subprograms of gcc under gdb --args, thus the invocation of cc1 is gdb --args cc1 .... -ffile-prefix-map=old=new When compiling files residing in directory old, record any references to them in the result of the compilation as if the files resided in directory new instead. Specifying this option is equivalent to specifying all the individual -f*-prefix-map options. This can be used to make reproducible builds that are location independent. See also -fmacro-prefix-map and -fdebug-prefix-map. -fplugin=name.so Load the plugin code in file name.so, assumed to be a shared object to be dlopen'd by the compiler. The base name of the shared object file is used to identify the plugin for the purposes of argument parsing (See -fplugin-arg-name-key=value below). Each plugin should define the callback functions specified in the Plugins API. -fplugin-arg-name-key=value Define an argument called key with a value of value for the plugin called name. -fdump-ada-spec[-slim] For C and C++ source and include files, generate corresponding Ada specs. -fada-spec-parent=unit In conjunction with -fdump-ada-spec[-slim] above, generate Ada specs as child units of parent unit. -fdump-go-spec=file For input files in any language, generate corresponding Go declarations in file. 
This generates Go "const", "type", "var", and "func" declarations which may be a useful way to start writing a Go interface to code written in some other language. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. Compiling C++ Programs C++ source files conventionally use one of the suffixes .C, .cc, .cpp, .CPP, .c++, .cp, or .cxx; C++ header files often use .hh, .hpp, .H, or (for shared template code) .tcc; and preprocessed C++ files use the suffix .ii. GCC recognizes files with these names and compiles them as C++ programs even if you call the compiler the same way as for compiling C programs (usually with the name gcc). However, the use of gcc does not add the C++ library. g++ is a program that calls GCC and automatically specifies linking against the C++ library. It treats .c, .h and .i files as C++ source files instead of C source files unless -x is used. This program is also useful when precompiling a C header file with a .h extension for use in C++ compilations. On many systems, g++ is also installed with the name c++. When you compile C++ programs, you may specify many of the same command-line options that you use for compiling programs in any language; or command-line options meaningful for C and related languages; or options that are meaningful only for C++ programs. Options Controlling C Dialect The following options control the dialect of C (or languages derived from C, such as C++, Objective-C and Objective-C++) that the compiler accepts: -ansi In C mode, this is equivalent to -std=c90. In C++ mode, it is equivalent to -std=c++98. This turns off certain features of GCC that are incompatible with ISO C90 (when compiling C code), or of standard C++ (when compiling C++ code), such as the "asm" and "typeof" keywords, and predefined macros such as "unix" and "vax" that identify the type of system you are using. It also enables the undesirable and rarely used ISO trigraph feature. For the C compiler, it disables recognition of C++ style // comments as well as the "inline" keyword. The alternate keywords "__asm__", "__extension__", "__inline__" and "__typeof__" continue to work despite -ansi. You would not want to use them in an ISO C program, of course, but it is useful to put them in header files that might be included in compilations done with -ansi. Alternate predefined macros such as "__unix__" and "__vax__" are also available, with or without -ansi. The -ansi option does not cause non-ISO programs to be rejected gratuitously. For that, -Wpedantic is required in addition to -ansi. The macro "__STRICT_ANSI__" is predefined when the -ansi option is used. Some header files may notice this macro and refrain from declaring certain functions or defining certain macros that the ISO standard doesn't call for; this is to avoid interfering with any programs that might use these names for other things. 
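For example, a project header might key a non-ISO declaration off this macro so that it stays out of strict compilations (the helper name below is purely illustrative):

        #ifndef __STRICT_ANSI__
        /* Declared only for GNU dialects; hidden under -ansi so that
           strictly conforming programs may use this name themselves.  */
        extern int my_gnu_only_helper (int);
        #endif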
Functions that are normally built in but do not have semantics defined by ISO C (such as "alloca" and "ffs") are not built-in functions when -ansi is used. -std= Determine the language standard. This option is currently only supported when compiling C or C++. The compiler can accept several base standards, such as c90 or c++98, and GNU dialects of those standards, such as gnu90 or gnu++98. When a base standard is specified, the compiler accepts all programs following that standard plus those using GNU extensions that do not contradict it. For example, -std=c90 turns off certain features of GCC that are incompatible with ISO C90, such as the "asm" and "typeof" keywords, but not other GNU extensions that do not have a meaning in ISO C90, such as omitting the middle term of a "?:" expression. On the other hand, when a GNU dialect of a standard is specified, all features supported by the compiler are enabled, even when those features change the meaning of the base standard. As a result, some strict-conforming programs may be rejected. The particular standard is used by -Wpedantic to identify which features are GNU extensions given that version of the standard. For example -std=gnu90 -Wpedantic warns about C++ style // comments, while -std=gnu99 -Wpedantic does not. A value for this option must be provided; possible values are c90 c89 iso9899:1990 Support all ISO C90 programs (certain GNU extensions that conflict with ISO C90 are disabled). Same as -ansi for C code. iso9899:199409 ISO C90 as modified in amendment 1. c99 c9x iso9899:1999 iso9899:199x ISO C99. This standard is substantially completely supported, modulo bugs and floating-point issues (mainly but not entirely relating to optional C99 features from Annexes F and G). See <http://gcc.gnu.org/c99status.html > for more information. The names c9x and iso9899:199x are deprecated. c11 c1x iso9899:2011 ISO C11, the 2011 revision of the ISO C standard. This standard is substantially completely supported, modulo bugs, floating-point issues (mainly but not entirely relating to optional C11 features from Annexes F and G) and the optional Annexes K (Bounds-checking interfaces) and L (Analyzability). The name c1x is deprecated. c17 c18 iso9899:2017 iso9899:2018 ISO C17, the 2017 revision of the ISO C standard (published in 2018). This standard is same as C11 except for corrections of defects (all of which are also applied with -std=c11) and a new value of "__STDC_VERSION__", and so is supported to the same extent as C11. c2x The next version of the ISO C standard, still under development. The support for this version is experimental and incomplete. gnu90 gnu89 GNU dialect of ISO C90 (including some C99 features). gnu99 gnu9x GNU dialect of ISO C99. The name gnu9x is deprecated. gnu11 gnu1x GNU dialect of ISO C11. The name gnu1x is deprecated. gnu17 gnu18 GNU dialect of ISO C17. This is the default for C code. gnu2x The next version of the ISO C standard, still under development, plus GNU extensions. The support for this version is experimental and incomplete. c++98 c++03 The 1998 ISO C++ standard plus the 2003 technical corrigendum and some additional defect reports. Same as -ansi for C++ code. gnu++98 gnu++03 GNU dialect of -std=c++98. c++11 c++0x The 2011 ISO C++ standard plus amendments. The name c++0x is deprecated. gnu++11 gnu++0x GNU dialect of -std=c++11. The name gnu++0x is deprecated. c++14 c++1y The 2014 ISO C++ standard plus amendments. The name c++1y is deprecated. gnu++14 gnu++1y GNU dialect of -std=c++14. 
This is the default for C++ code. The name gnu++1y is deprecated. c++17 c++1z The 2017 ISO C++ standard plus amendments. The name c++1z is deprecated. gnu++17 gnu++1z GNU dialect of -std=c++17. The name gnu++1z is deprecated. c++2a The next revision of the ISO C++ standard, tentatively planned for 2020. Support is highly experimental, and will almost certainly change in incompatible ways in future releases. gnu++2a GNU dialect of -std=c++2a. Support is highly experimental, and will almost certainly change in incompatible ways in future releases. -fgnu89-inline The option -fgnu89-inline tells GCC to use the traditional GNU semantics for "inline" functions when in C99 mode. Using this option is roughly equivalent to adding the "gnu_inline" function attribute to all inline functions. The option -fno-gnu89-inline explicitly tells GCC to use the C99 semantics for "inline" when in C99 or gnu99 mode (i.e., it specifies the default behavior). This option is not supported in -std=c90 or -std=gnu90 mode. The preprocessor macros "__GNUC_GNU_INLINE__" and "__GNUC_STDC_INLINE__" may be used to check which semantics are in effect for "inline" functions. -fpermitted-flt-eval-methods=style ISO/IEC TS 18661-3 defines new permissible values for "FLT_EVAL_METHOD" that indicate that operations and constants with a semantic type that is an interchange or extended format should be evaluated to the precision and range of that type. These new values are a superset of those permitted under C99/C11, which does not specify the meaning of other positive values of "FLT_EVAL_METHOD". As such, code conforming to C11 may not have been written expecting the possibility of the new values. -fpermitted-flt-eval-methods specifies whether the compiler should allow only the values of "FLT_EVAL_METHOD" specified in C99/C11, or the extended set of values specified in ISO/IEC TS 18661-3. style is either "c11" or "ts-18661-3" as appropriate. The default when in a standards compliant mode (-std=c11 or similar) is -fpermitted-flt-eval-methods=c11. The default when in a GNU dialect (-std=gnu11 or similar) is -fpermitted-flt-eval-methods=ts-18661-3. -aux-info filename Output to the given filename prototyped declarations for all functions declared and/or defined in a translation unit, including those in header files. This option is silently ignored in any language other than C. Besides declarations, the file indicates, in comments, the origin of each declaration (source file and line), whether the declaration was implicit, prototyped or unprototyped (I, N for new or O for old, respectively, in the first character after the line number and the colon), and whether it came from a declaration or a definition (C or F, respectively, in the following character). In the case of function definitions, a K&R-style list of arguments followed by their declarations is also provided, inside comments, after the declaration. -fallow-parameterless-variadic-functions Accept variadic functions without named parameters. Although it is possible to define such a function, this is not very useful as it is not possible to read the arguments. This is only supported for C as this construct is allowed by C++. -fno-asm Do not recognize "asm", "inline" or "typeof" as a keyword, so that code can use these words as identifiers. You can use the keywords "__asm__", "__inline__" and "__typeof__" instead. -ansi implies -fno-asm. In C++, this switch only affects the "typeof" keyword, since "asm" and "inline" are standard keywords. 
You may want to use the -fno-gnu-keywords flag instead, which has the same effect. In C99 mode (-std=c99 or -std=gnu99), this switch only affects the "asm" and "typeof" keywords, since "inline" is a standard keyword in ISO C99. -fno-builtin -fno-builtin-function Don't recognize built-in functions that do not begin with __builtin_ as prefix. GCC normally generates special code to handle certain built- in functions more efficiently; for instance, calls to "alloca" may become single instructions which adjust the stack directly, and calls to "memcpy" may become inline copy loops. The resulting code is often both smaller and faster, but since the function calls no longer appear as such, you cannot set a breakpoint on those calls, nor can you change the behavior of the functions by linking with a different library. In addition, when a function is recognized as a built-in function, GCC may use information about that function to warn about problems with calls to that function, or to generate more efficient code, even if the resulting code still contains calls to that function. For example, warnings are given with -Wformat for bad calls to "printf" when "printf" is built in and "strlen" is known not to modify global memory. With the -fno-builtin-function option only the built-in function function is disabled. function must not begin with __builtin_. If a function is named that is not built-in in this version of GCC, this option is ignored. There is no corresponding -fbuiltin-function option; if you wish to enable built-in functions selectively when using -fno-builtin or -ffreestanding, you may define macros such as: #define abs(n) __builtin_abs ((n)) #define strcpy(d, s) __builtin_strcpy ((d), (s)) -fgimple Enable parsing of function definitions marked with "__GIMPLE". This is an experimental feature that allows unit testing of GIMPLE passes. -fhosted Assert that compilation targets a hosted environment. This implies -fbuiltin. A hosted environment is one in which the entire standard library is available, and in which "main" has a return type of "int". Examples are nearly everything except a kernel. This is equivalent to -fno-freestanding. -ffreestanding Assert that compilation targets a freestanding environment. This implies -fno-builtin. A freestanding environment is one in which the standard library may not exist, and program startup may not necessarily be at "main". The most obvious example is an OS kernel. This is equivalent to -fno-hosted. -fopenacc Enable handling of OpenACC directives "#pragma acc" in C/C++ and "!$acc" in Fortran. When -fopenacc is specified, the compiler generates accelerated code according to the OpenACC Application Programming Interface v2.0 <https://www.openacc.org >. This option implies -pthread, and thus is only supported on targets that have support for -pthread. -fopenacc-dim=geom Specify default compute dimensions for parallel offload regions that do not explicitly specify. The geom value is a triple of ':'-separated sizes, in order 'gang', 'worker' and, 'vector'. A size can be omitted, to use a target-specific default value. -fopenmp Enable handling of OpenMP directives "#pragma omp" in C/C++ and "!$omp" in Fortran. When -fopenmp is specified, the compiler generates parallel code according to the OpenMP Application Program Interface v4.5 <https://www.openmp.org >. This option implies -pthread, and thus is only supported on targets that have support for -pthread. -fopenmp implies -fopenmp-simd. 
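For example, the following translation unit (file and function names are illustrative) is parallelized only when compiled with -fopenmp, e.g. g++ -fopenmp omp_demo.cc; without the option the pragma is ignored and the loop runs serially:

        #include <cstdio>

        int main ()
        {
          // With -fopenmp the iterations are split across a team of
          // threads; without it this is an ordinary sequential loop.
          #pragma omp parallel for
          for (int i = 0; i < 8; i++)
            std::printf ("iteration %d\n", i);
          return 0;
        }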
-fopenmp-simd Enable handling of OpenMP's SIMD directives with "#pragma omp" in C/C++ and "!$omp" in Fortran. Other OpenMP directives are ignored. -fgnu-tm When the option -fgnu-tm is specified, the compiler generates code for the Linux variant of Intel's current Transactional Memory ABI specification document (Revision 1.1, May 6 2009). This is an experimental feature whose interface may change in future versions of GCC, as the official specification changes. Please note that not all architectures are supported for this feature. For more information on GCC's support for transactional memory, Note that the transactional memory feature is not supported with non-call exceptions (-fnon-call-exceptions). -fms-extensions Accept some non-standard constructs used in Microsoft header files. In C++ code, this allows member names in structures to be similar to previous types declarations. typedef int UOW; struct ABC { UOW UOW; }; Some cases of unnamed fields in structures and unions are only accepted with this option. Note that this option is off for all targets but x86 targets using ms-abi. -fplan9-extensions Accept some non-standard constructs used in Plan 9 code. This enables -fms-extensions, permits passing pointers to structures with anonymous fields to functions that expect pointers to elements of the type of the field, and permits referring to anonymous fields declared using a typedef. This is only supported for C, not C++. -fcond-mismatch Allow conditional expressions with mismatched types in the second and third arguments. The value of such an expression is void. This option is not supported for C++. -flax-vector-conversions Allow implicit conversions between vectors with differing numbers of elements and/or incompatible element types. This option should not be used for new code. -funsigned-char Let the type "char" be unsigned, like "unsigned char". Each kind of machine has a default for what "char" should be. It is either like "unsigned char" by default or like "signed char" by default. Ideally, a portable program should always use "signed char" or "unsigned char" when it depends on the signedness of an object. But many programs have been written to use plain "char" and expect it to be signed, or expect it to be unsigned, depending on the machines they were written for. This option, and its inverse, let you make such a program work with the opposite default. The type "char" is always a distinct type from each of "signed char" or "unsigned char", even though its behavior is always just like one of those two. -fsigned-char Let the type "char" be signed, like "signed char". Note that this is equivalent to -fno-unsigned-char, which is the negative form of -funsigned-char. Likewise, the option -fno-signed-char is equivalent to -funsigned-char. -fsigned-bitfields -funsigned-bitfields -fno-signed-bitfields -fno-unsigned-bitfields These options control whether a bit-field is signed or unsigned, when the declaration does not use either "signed" or "unsigned". By default, such a bit-field is signed, because this is consistent: the basic integer types such as "int" are signed types. -fsso-struct=endianness Set the default scalar storage order of structures and unions to the specified endianness. The accepted values are big- endian, little-endian and native for the native endianness of the target (the default). This option is not supported for C++. 
Warning: the -fsso-struct switch causes GCC to generate code that is not binary compatible with code generated without it if the specified endianness is not the native endianness of the target. Options Controlling C++ Dialect This section describes the command-line options that are only meaningful for C++ programs. You can also use most of the GNU compiler options regardless of what language your program is in. For example, you might compile a file firstClass.C like this: g++ -g -fstrict-enums -O -c firstClass.C In this example, only -fstrict-enums is an option meant only for C++ programs; you can use the other options with any language supported by GCC. Some options for compiling C programs, such as -std, are also relevant for C++ programs. Here is a list of options that are only for compiling C++ programs: -fabi-version=n Use version n of the C++ ABI. The default is version 0. Version 0 refers to the version conforming most closely to the C++ ABI specification. Therefore, the ABI obtained using version 0 will change in different versions of G++ as ABI bugs are fixed. Version 1 is the version of the C++ ABI that first appeared in G++ 3.2. Version 2 is the version of the C++ ABI that first appeared in G++ 3.4, and was the default through G++ 4.9. Version 3 corrects an error in mangling a constant address as a template argument. Version 4, which first appeared in G++ 4.5, implements a standard mangling for vector types. Version 5, which first appeared in G++ 4.6, corrects the mangling of attribute const/volatile on function pointer types, decltype of a plain decl, and use of a function parameter in the declaration of another parameter. Version 6, which first appeared in G++ 4.7, corrects the promotion behavior of C++11 scoped enums and the mangling of template argument packs, const/static_cast, prefix ++ and --, and a class scope function used as a template argument. Version 7, which first appeared in G++ 4.8, that treats nullptr_t as a builtin type and corrects the mangling of lambdas in default argument scope. Version 8, which first appeared in G++ 4.9, corrects the substitution behavior of function types with function-cv- qualifiers. Version 9, which first appeared in G++ 5.2, corrects the alignment of "nullptr_t". Version 10, which first appeared in G++ 6.1, adds mangling of attributes that affect type identity, such as ia32 calling convention attributes (e.g. stdcall). Version 11, which first appeared in G++ 7, corrects the mangling of sizeof... expressions and operator names. For multiple entities with the same name within a function, that are declared in different scopes, the mangling now changes starting with the twelfth occurrence. It also implies -fnew-inheriting-ctors. Version 12, which first appeared in G++ 8, corrects the calling conventions for empty classes on the x86_64 target and for classes with only deleted copy/move constructors. It accidentally changes the calling convention for classes with a deleted copy constructor and a trivial move constructor. Version 13, which first appeared in G++ 8.2, fixes the accidental change in version 12. See also -Wabi. -fabi-compat-version=n On targets that support strong aliases, G++ works around mangling changes by creating an alias with the correct mangled name when defining a symbol with an incorrect mangled name. This switch specifies which ABI version to use for the alias. With -fabi-version=0 (the default), this defaults to 11 (GCC 7 compatibility). If another ABI version is explicitly selected, this defaults to 0. 
For compatibility with GCC versions 3.2 through 4.9, use -fabi-compat-version=2. If this option is not provided but -Wabi=n is, that version is used for compatibility aliases. If this option is provided along with -Wabi (without the version), the version from this option is used for the warning. -fno-access-control Turn off all access checking. This switch is mainly useful for working around bugs in the access control code. -faligned-new Enable support for C++17 "new" of types that require more alignment than "void* ::operator new(std::size_t)" provides. A numeric argument such as "-faligned-new=32" can be used to specify how much alignment (in bytes) is provided by that function, but few users will need to override the default of "alignof(std::max_align_t)". This flag is enabled by default for -std=c++17. -fchar8_t -fno-char8_t Enable support for "char8_t" as adopted for C++2a. This includes the addition of a new "char8_t" fundamental type, changes to the types of UTF-8 string and character literals, new signatures for user-defined literals, associated standard library updates, and new "__cpp_char8_t" and "__cpp_lib_char8_t" feature test macros. This option enables functions to be overloaded for ordinary and UTF-8 strings: int f(const char *); // #1 int f(const char8_t *); // #2 int v1 = f("text"); // Calls #1 int v2 = f(u8"text"); // Calls #2 and introduces new signatures for user-defined literals: int operator""_udl1(char8_t); int v3 = u8'x'_udl1; int operator""_udl2(const char8_t*, std::size_t); int v4 = u8"text"_udl2; template<typename T, T...> int operator""_udl3(); int v5 = u8"text"_udl3; The change to the types of UTF-8 string and character literals introduces incompatibilities with ISO C++11 and later standards. For example, the following code is well- formed under ISO C++11, but is ill-formed when -fchar8_t is specified. char ca[] = u8"xx"; // error: char-array initialized from wide // string const char *cp = u8"xx";// error: invalid conversion from // `const char8_t*' to `const char*' int f(const char*); auto v = f(u8"xx"); // error: invalid conversion from // `const char8_t*' to `const char*' std::string s{u8"xx"}; // error: no matching function for call to // `std::basic_string<char>::basic_string()' using namespace std::literals; s = u8"xx"s; // error: conversion from // `basic_string<char8_t>' to non-scalar // type `basic_string<char>' requested -fcheck-new Check that the pointer returned by "operator new" is non-null before attempting to modify the storage allocated. This check is normally unnecessary because the C++ standard specifies that "operator new" only returns 0 if it is declared "throw()", in which case the compiler always checks the return value even without this option. In all other cases, when "operator new" has a non-empty exception specification, memory exhaustion is signalled by throwing "std::bad_alloc". See also new (nothrow). -fconcepts Enable support for the C++ Extensions for Concepts Technical Specification, ISO 19217 (2015), which allows code like template <class T> concept bool Addable = requires (T t) { t + t; }; template <Addable T> T add (T a, T b) { return a + b; } -fconstexpr-depth=n Set the maximum nested evaluation depth for C++11 constexpr functions to n. A limit is needed to detect endless recursion during constant expression evaluation. The minimum specified by the standard is 512. -fconstexpr-loop-limit=n Set the maximum number of iterations for a loop in C++14 constexpr functions to n. 
A limit is needed to detect infinite loops during constant expression evaluation. The default is 262144 (1<<18). -fconstexpr-ops-limit=n Set the maximum number of operations during a single constexpr evaluation. Even when number of iterations of a single loop is limited with the above limit, if there are several nested loops and each of them has many iterations but still smaller than the above limit, or if in a body of some loop or even outside of a loop too many expressions need to be evaluated, the resulting constexpr evaluation might take too long. The default is 33554432 (1<<25). -fdeduce-init-list Enable deduction of a template type parameter as "std::initializer_list" from a brace-enclosed initializer list, i.e. template <class T> auto forward(T t) -> decltype (realfn (t)) { return realfn (t); } void f() { forward({1,2}); // call forward<std::initializer_list<int>> } This deduction was implemented as a possible extension to the originally proposed semantics for the C++11 standard, but was not part of the final standard, so it is disabled by default. This option is deprecated, and may be removed in a future version of G++. -fno-elide-constructors The C++ standard allows an implementation to omit creating a temporary that is only used to initialize another object of the same type. Specifying this option disables that optimization, and forces G++ to call the copy constructor in all cases. This option also causes G++ to call trivial member functions which otherwise would be expanded inline. In C++17, the compiler is required to omit these temporaries, but this option still affects trivial member functions. -fno-enforce-eh-specs Don't generate code to check for violation of exception specifications at run time. This option violates the C++ standard, but may be useful for reducing code size in production builds, much like defining "NDEBUG". This does not give user code permission to throw exceptions in violation of the exception specifications; the compiler still optimizes based on the specifications, so throwing an unexpected exception results in undefined behavior at run time. -fextern-tls-init -fno-extern-tls-init The C++11 and OpenMP standards allow "thread_local" and "threadprivate" variables to have dynamic (runtime) initialization. To support this, any use of such a variable goes through a wrapper function that performs any necessary initialization. When the use and definition of the variable are in the same translation unit, this overhead can be optimized away, but when the use is in a different translation unit there is significant overhead even if the variable doesn't actually need dynamic initialization. If the programmer can be sure that no use of the variable in a non-defining TU needs to trigger dynamic initialization (either because the variable is statically initialized, or a use of the variable in the defining TU will be executed before any uses in another TU), they can avoid this overhead with the -fno-extern-tls-init option. On targets that support symbol aliases, the default is -fextern-tls-init. On targets that do not support symbol aliases, the default is -fno-extern-tls-init. -fno-gnu-keywords Do not recognize "typeof" as a keyword, so that code can use this word as an identifier. You can use the keyword "__typeof__" instead. This option is implied by the strict ISO C++ dialects: -ansi, -std=c++98, -std=c++11, etc. -fno-implicit-templates Never emit code for non-inline templates that are instantiated implicitly (i.e. 
by use); only emit code for explicit instantiations. If you use this option, you must take care to structure your code to include all the necessary explicit instantiations to avoid getting undefined symbols at link time. -fno-implicit-inline-templates Don't emit code for implicit instantiations of inline templates, either. The default is to handle inlines differently so that compiles with and without optimization need the same set of explicit instantiations. -fno-implement-inlines To save space, do not emit out-of-line copies of inline functions controlled by "#pragma implementation". This causes linker errors if these functions are not inlined everywhere they are called. -fms-extensions Disable -Wpedantic warnings about constructs used in MFC, such as implicit int and getting a pointer to member function via non-standard syntax. -fnew-inheriting-ctors Enable the P0136 adjustment to the semantics of C++11 constructor inheritance. This is part of C++17 but also considered to be a Defect Report against C++11 and C++14. This flag is enabled by default unless -fabi-version=10 or lower is specified. -fnew-ttp-matching Enable the P0522 resolution to Core issue 150, template template parameters and default arguments: this allows a template with default template arguments as an argument for a template template parameter with fewer template parameters. This flag is enabled by default for -std=c++17. -fno-nonansi-builtins Disable built-in declarations of functions that are not mandated by ANSI/ISO C. These include "ffs", "alloca", "_exit", "index", "bzero", "conjf", and other related functions. -fnothrow-opt Treat a "throw()" exception specification as if it were a "noexcept" specification to reduce or eliminate the text size overhead relative to a function with no exception specification. If the function has local variables of types with non-trivial destructors, the exception specification actually makes the function smaller because the EH cleanups for those variables can be optimized away. The semantic effect is that an exception thrown out of a function with such an exception specification results in a call to "terminate" rather than "unexpected". -fno-operator-names Do not treat the operator name keywords "and", "bitand", "bitor", "compl", "not", "or" and "xor" as keywords (synonyms for the corresponding operators). -fno-optional-diags Disable diagnostics that the standard says a compiler does not need to issue. Currently, the only such diagnostic issued by G++ is the one for a name having multiple meanings within a class. -fpermissive Downgrade some diagnostics about nonconformant code from errors to warnings. Thus, using -fpermissive allows some nonconforming code to compile. -fno-pretty-templates When an error message refers to a specialization of a function template, the compiler normally prints the signature of the template followed by the template arguments and any typedefs or typenames in the signature (e.g. "void f(T) [with T = int]" rather than "void f(int)") so that it's clear which template is involved. When an error message refers to a specialization of a class template, the compiler omits any template arguments that match the default template arguments for that template. If either of these behaviors makes it harder to understand the error message rather than easier, you can use -fno-pretty-templates to disable them. -frepo Enable automatic template instantiation at link time. This option also implies -fno-implicit-templates.
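For example, when compiling with -fno-implicit-templates (which -frepo implies), one translation unit has to spell out the instantiations that the rest of the program relies on; a minimal sketch with illustrative names, compiled as g++ -c -fno-implicit-templates instances.cc:

        #include <vector>

        template <class T>
        T clamp_to_zero (T v) { return v < T () ? T () : v; }

        // Explicit instantiations: with -fno-implicit-templates these are
        // the only template definitions emitted into this object file.
        template int clamp_to_zero<int> (int);
        template class std::vector<int>;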
-fno-rtti Disable generation of information about every class with virtual functions for use by the C++ run-time type identification features ("dynamic_cast" and "typeid"). If you don't use those parts of the language, you can save some space by using this flag. Note that exception handling uses the same information, but G++ generates it as needed. The "dynamic_cast" operator can still be used for casts that do not require run-time type information, i.e. casts to "void *" or to unambiguous base classes. Mixing code compiled with -frtti with that compiled with -fno-rtti may not work. For example, programs may fail to link if a class compiled with -fno-rtti is used as a base for a class compiled with -frtti. -fsized-deallocation Enable the built-in global declarations void operator delete (void *, std::size_t) noexcept; void operator delete[] (void *, std::size_t) noexcept; as introduced in C++14. This is useful for user-defined replacement deallocation functions that, for example, use the size of the object to make deallocation faster. Enabled by default under -std=c++14 and above. The flag -Wsized-deallocation warns about places that might want to add a definition. -fstrict-enums Allow the compiler to optimize using the assumption that a value of enumerated type can only be one of the values of the enumeration (as defined in the C++ standard; basically, a value that can be represented in the minimum number of bits needed to represent all the enumerators). This assumption may not be valid if the program uses a cast to convert an arbitrary integer value to the enumerated type. -fstrong-eval-order Evaluate member access, array subscripting, and shift expressions in left-to-right order, and evaluate assignment in right-to-left order, as adopted for C++17. Enabled by default with -std=c++17. -fstrong-eval-order=some enables just the ordering of member access and shift expressions, and is the default without -std=c++17. -ftemplate-backtrace-limit=n Set the maximum number of template instantiation notes for a single warning or error to n. The default value is 10. -ftemplate-depth=n Set the maximum instantiation depth for template classes to n. A limit on the template instantiation depth is needed to detect endless recursions during template class instantiation. ANSI/ISO C++ conforming programs must not rely on a maximum depth greater than 17 (changed to 1024 in C++11). The default value is 900, as the compiler can run out of stack space before hitting 1024 in some situations. -fno-threadsafe-statics Do not emit the extra code to use the routines specified in the C++ ABI for thread-safe initialization of local statics. You can use this option to reduce code size slightly in code that doesn't need to be thread-safe. -fuse-cxa-atexit Register destructors for objects with static storage duration with the "__cxa_atexit" function rather than the "atexit" function. This option is required for fully standards- compliant handling of static destructors, but only works if your C library supports "__cxa_atexit". -fno-use-cxa-get-exception-ptr Don't use the "__cxa_get_exception_ptr" runtime routine. This causes "std::uncaught_exception" to be incorrect, but is necessary if the runtime routine is not available. -fvisibility-inlines-hidden This switch declares that the user does not attempt to compare pointers to inline functions or methods where the addresses of the two functions are taken in different shared objects. 
The effect of this is that GCC may, effectively, mark inline methods with "__attribute__ ((visibility ("hidden")))" so that they do not appear in the export table of a DSO and do not require a PLT indirection when used within the DSO. Enabling this option can have a dramatic effect on load and link times of a DSO as it massively reduces the size of the dynamic export table when the library makes heavy use of templates. The behavior of this switch is not quite the same as marking the methods as hidden directly, because it does not affect static variables local to the function or cause the compiler to deduce that the function is defined in only one shared object. You may mark a method as having a visibility explicitly to negate the effect of the switch for that method. For example, if you do want to compare pointers to a particular inline method, you might mark it as having default visibility. Marking the enclosing class with explicit visibility has no effect. Explicitly instantiated inline methods are unaffected by this option as their linkage might otherwise cross a shared library boundary. -fvisibility-ms-compat This flag attempts to use visibility settings to make GCC's C++ linkage model compatible with that of Microsoft Visual Studio. The flag makes these changes to GCC's linkage model: 1. It sets the default visibility to "hidden", like -fvisibility=hidden. 2. Types, but not their members, are not hidden by default. 3. The One Definition Rule is relaxed for types without explicit visibility specifications that are defined in more than one shared object: those declarations are permitted if they are permitted when this option is not used. In new code it is better to use -fvisibility=hidden and export those classes that are intended to be externally visible. Unfortunately it is possible for code to rely, perhaps accidentally, on the Visual Studio behavior. Among the consequences of these changes are that static data members of the same type with the same name but defined in different shared objects are different, so changing one does not change the other; and that pointers to function members defined in different shared objects may not compare equal. When this flag is given, it is a violation of the ODR to define types with the same name differently. -fno-weak Do not use weak symbol support, even if it is provided by the linker. By default, G++ uses weak symbols if they are available. This option exists only for testing, and should not be used by end-users; it results in inferior code and has no benefits. This option may be removed in a future release of G++. -nostdinc++ Do not search for header files in the standard directories specific to C++, but do still search the other standard directories. (This option is used when building the C++ library.) In addition, these optimization, warning, and code generation options have meanings only for C++ programs: -Wabi (C, Objective-C, C++ and Objective-C++ only) Warn when G++ generates code that is probably not compatible with the vendor-neutral C++ ABI. Since G++ now defaults to updating the ABI with each major release, normally -Wabi will warn only if there is a check added later in a release series for an ABI issue discovered since the initial release. -Wabi will warn about more things if an older ABI version is selected (with -fabi-version=n). -Wabi can also be used with an explicit version number to warn about compatibility with a particular -fabi-version level, e.g. -Wabi=2 to warn about changes relative to -fabi-version=2.
If an explicit version number is provided and -fabi-compat-version is not specified, the version number from this option is used for compatibility aliases. If no explicit version number is provided with this option, but -fabi-compat-version is specified, that version number is used for ABI warnings. Although an effort has been made to warn about all such cases, there are probably some cases that are not warned about, even though G++ is generating incompatible code. There may also be cases where warnings are emitted even though the code that is generated is compatible. You should rewrite your code to avoid these warnings if you are concerned about the fact that code generated by G++ may not be binary compatible with code generated by other compilers. Known incompatibilities in -fabi-version=2 (which was the default from GCC 3.4 to 4.9) include: * A template with a non-type template parameter of reference type was mangled incorrectly: extern int N; template <int &> struct S {}; void n (S<N>) {} This was fixed in -fabi-version=3. * SIMD vector types declared using "__attribute ((vector_size))" were mangled in a non-standard way that does not allow for overloading of functions taking vectors of different sizes. The mangling was changed in -fabi-version=4. * "__attribute ((const))" and "noreturn" were mangled as type qualifiers, and "decltype" of a plain declaration was folded away. These mangling issues were fixed in -fabi-version=5. * Scoped enumerators passed as arguments to a variadic function are promoted like unscoped enumerators, causing "va_arg" to complain. On most targets this does not actually affect the parameter passing ABI, as there is no way to pass an argument smaller than "int". Also, the ABI changed the mangling of template argument packs, "const_cast", "static_cast", prefix increment/decrement, and a class scope function used as a template argument. These issues were corrected in -fabi-version=6. * Lambdas in default argument scope were mangled incorrectly, and the ABI changed the mangling of "nullptr_t". These issues were corrected in -fabi-version=7. * When mangling a function type with function-cv-qualifiers, the un-qualified function type was incorrectly treated as a substitution candidate. This was fixed in -fabi-version=8, the default for GCC 5.1. * "decltype(nullptr)" incorrectly had an alignment of 1, leading to unaligned accesses. Note that this did not affect the ABI of a function with a "nullptr_t" parameter, as parameters have a minimum alignment. This was fixed in -fabi-version=9, the default for GCC 5.2. * Target-specific attributes that affect the identity of a type, such as ia32 calling conventions on a function type (stdcall, regparm, etc.), did not affect the mangled name, leading to name collisions when function pointers were used as template arguments. This was fixed in -fabi-version=10, the default for GCC 6.1. It also warns about psABI-related changes. The known psABI changes at this point include: * For SysV/x86-64, unions with "long double" members are passed in memory as specified in psABI. For example: union U { long double ld; int i; }; "union U" is always passed in memory. -Wabi-tag (C++ and Objective-C++ only) Warn when a type with an ABI tag is used in a context that does not have that ABI tag. See C++ Attributes for more information about ABI tags.
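For instance, a type whose members carry an ABI tag that the enclosing type itself lacks is the kind of mismatch this warning is intended to catch (the tag and type names below are arbitrary):

        // Inner's mangled name carries the "vNEW" ABI tag.
        struct __attribute__ ((abi_tag ("vNEW"))) Inner { int i; };

        // Outer depends on Inner but is not itself tagged with "vNEW";
        // compiling with -Wabi-tag is expected to diagnose this.
        struct Outer { Inner member; };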
-Wctor-dtor-privacy (C++ and Objective-C++ only) Warn when a class seems unusable because all the constructors or destructors in that class are private, and it has neither friends nor public static member functions. Also warn if there are no non-private methods, and there's at least one private member function that isn't a constructor or destructor. -Wdelete-non-virtual-dtor (C++ and Objective-C++ only) Warn when "delete" is used to destroy an instance of a class that has virtual functions and a non-virtual destructor. It is unsafe to delete an instance of a derived class through a pointer to a base class if the base class does not have a virtual destructor. This warning is enabled by -Wall. -Wdeprecated-copy (C++ and Objective-C++ only) Warn that the implicit declaration of a copy constructor or copy assignment operator is deprecated if the class has a user-provided copy constructor or copy assignment operator, in C++11 and up. This warning is enabled by -Wextra. With -Wdeprecated-copy-dtor, also deprecate if the class has a user-provided destructor. -Wno-init-list-lifetime (C++ and Objective-C++ only) Do not warn about uses of "std::initializer_list" that are likely to result in dangling pointers. Since the underlying array for an "initializer_list" is handled like a normal C++ temporary object, it is easy to inadvertently keep a pointer to the array past the end of the array's lifetime. For example: * If a function returns a temporary "initializer_list", or a local "initializer_list" variable, the array's lifetime ends at the end of the return statement, so the value returned has a dangling pointer. * If a new-expression creates an "initializer_list", the array only lives until the end of the enclosing full-expression, so the "initializer_list" in the heap has a dangling pointer. * When an "initializer_list" variable is assigned from a brace-enclosed initializer list, the temporary array created for the right side of the assignment only lives until the end of the full-expression, so at the next statement the "initializer_list" variable has a dangling pointer.

        // li's initial underlying array lives as long as li
        std::initializer_list<int> li = { 1,2,3 };
        // assignment changes li to point to a temporary array
        li = { 4, 5 };
        // now the temporary is gone and li has a dangling pointer
        int i = li.begin()[0]; // undefined behavior

* When a list constructor stores the "begin" pointer from the "initializer_list" argument, this doesn't extend the lifetime of the array, so if a class variable is constructed from a temporary "initializer_list", the pointer is left dangling by the end of the variable declaration statement. -Wliteral-suffix (C++ and Objective-C++ only) Warn when a string or character literal is followed by a ud-suffix which does not begin with an underscore. As a conforming extension, GCC treats such suffixes as separate preprocessing tokens in order to maintain backwards compatibility with code that uses formatting macros from "<inttypes.h>". For example:

        #define __STDC_FORMAT_MACROS
        #include <inttypes.h>
        #include <stdio.h>

        int main() {
          int64_t i64 = 123;
          printf("My int64: %" PRId64"\n", i64);
        }

In this case, "PRId64" is treated as a separate preprocessing token. Additionally, warn when a user-defined literal operator is declared with a literal suffix identifier that doesn't begin with an underscore. Literal suffix identifiers that don't begin with an underscore are reserved for future standardization. This warning is enabled by default.
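For instance (the suffix names below are arbitrary), the first declaration draws this warning while the underscore-prefixed form does not:

        // Reserved: literal suffix identifier without a leading underscore.
        long double operator"" km (long double v) { return v * 1000.0L; }

        // Conforming user-defined literal suffix.
        constexpr long double operator"" _km (long double v) { return v * 1000.0L; }

        constexpr long double trip = 1.5_km;   // 1500.0L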
-Wlto-type-mismatch During the link-time optimization warn about type mismatches in global declarations from different compilation units. Requires -flto to be enabled. Enabled by default. -Wno-narrowing (C++ and Objective-C++ only) For C++11 and later standards, narrowing conversions are diagnosed by default, as required by the standard. A narrowing conversion from a constant produces an error, and a narrowing conversion from a non-constant produces a warning, but -Wno-narrowing suppresses the diagnostic. Note that this does not affect the meaning of well-formed code; narrowing conversions are still considered ill-formed in SFINAE contexts. With -Wnarrowing in C++98, warn when a narrowing conversion prohibited by C++11 occurs within { }, e.g. int i = { 2.2 }; // error: narrowing from double to int This flag is included in -Wall and -Wc++11-compat. -Wnoexcept (C++ and Objective-C++ only) Warn when a noexcept-expression evaluates to false because of a call to a function that does not have a non-throwing exception specification (i.e. "throw()" or "noexcept") but is known by the compiler to never throw an exception. -Wnoexcept-type (C++ and Objective-C++ only) Warn if the C++17 feature making "noexcept" part of a function type changes the mangled name of a symbol relative to C++14. Enabled by -Wabi and -Wc++17-compat. As an example: template <class T> void f(T t) { t(); }; void g() noexcept; void h() { f(g); } In C++14, "f" calls "f<void(*)()>", but in C++17 it calls "f<void(*)()noexcept>". -Wclass-memaccess (C++ and Objective-C++ only) Warn when the destination of a call to a raw memory function such as "memset" or "memcpy" is an object of class type, and when writing into such an object might bypass the class non- trivial or deleted constructor or copy assignment, violate const-correctness or encapsulation, or corrupt virtual table pointers. Modifying the representation of such objects may violate invariants maintained by member functions of the class. For example, the call to "memset" below is undefined because it modifies a non-trivial class object and is, therefore, diagnosed. The safe way to either initialize or clear the storage of objects of such types is by using the appropriate constructor or assignment operator, if one is available. std::string str = "abc"; memset (&str, 0, sizeof str); The -Wclass-memaccess option is enabled by -Wall. Explicitly casting the pointer to the class object to "void *" or to a type that can be safely accessed by the raw memory function suppresses the warning. -Wnon-virtual-dtor (C++ and Objective-C++ only) Warn when a class has virtual functions and an accessible non-virtual destructor itself or in an accessible polymorphic base class, in which case it is possible but unsafe to delete an instance of a derived class through a pointer to the class itself or base class. This warning is automatically enabled if -Weffc++ is specified. -Wregister (C++ and Objective-C++ only) Warn on uses of the "register" storage class specifier, except when it is part of the GNU Explicit Register Variables extension. The use of the "register" keyword as storage class specifier has been deprecated in C++11 and removed in C++17. Enabled by default with -std=c++17. -Wreorder (C++ and Objective-C++ only) Warn when the order of member initializers given in the code does not match the order in which they must be executed. 
For instance:

      struct A {
        int i;
        int j;
        A(): j (0), i (1) { }
      };

The compiler rearranges the member initializers for "i" and "j" to match the declaration order of the members, emitting a warning to that effect. This warning is enabled by -Wall.

-Wno-pessimizing-move (C++ and Objective-C++ only) Warn when a call to "std::move" prevents copy elision. A typical scenario in which copy elision can occur is when returning from a function with a class return type, where the expression being returned is the name of a non-volatile automatic object, is not a function parameter, and has the same type as the function return type.

      struct T {
        ...
      };
      T fn()
      {
        T t;
        ...
        return std::move (t);
      }

But in this example, the "std::move" call prevents copy elision. This warning is enabled by -Wall. (A compilable variant of this sketch appears after the -Wstrict-null-sentinel entry below.)

-Wno-redundant-move (C++ and Objective-C++ only) Warn about redundant calls to "std::move"; that is, when a move operation would have been performed even without the "std::move" call. This happens because the compiler is forced to treat the object as if it were an rvalue in certain situations such as returning a local variable, where copy elision isn't applicable. Consider:

      struct T {
        ...
      };
      T fn(T t)
      {
        ...
        return std::move (t);
      }

Here, the "std::move" call is redundant. Because G++ implements Core Issue 1579, another example is:

      struct T { // convertible to U
        ...
      };
      struct U {
        ...
      };
      U fn()
      {
        T t;
        ...
        return std::move (t);
      }

In this example, copy elision isn't applicable because the type of the expression being returned and the function return type differ, yet G++ treats the return value as if it were designated by an rvalue. This warning is enabled by -Wextra.

-fext-numeric-literals (C++ and Objective-C++ only) Accept imaginary, fixed-point, or machine-defined literal number suffixes as GNU extensions. When this option is turned off these suffixes are treated as C++11 user-defined literal numeric suffixes. This is on by default for all pre-C++11 dialects and all GNU dialects: -std=c++98, -std=gnu++98, -std=gnu++11, -std=gnu++14. This option is off by default for ISO C++11 onwards (-std=c++11, ...).

The following -W... options are not affected by -Wall.

-Weffc++ (C++ and Objective-C++ only) Warn about violations of the following style guidelines from Scott Meyers' Effective C++ series of books:

* Define a copy constructor and an assignment operator for classes with dynamically-allocated memory.

* Prefer initialization to assignment in constructors.

* Have "operator=" return a reference to *this.

* Don't try to return a reference when you must return an object.

* Distinguish between prefix and postfix forms of increment and decrement operators.

* Never overload "&&", "||", or ",".

This option also enables -Wnon-virtual-dtor, which is also one of the Effective C++ recommendations. However, the check is extended to warn about the lack of virtual destructor in accessible non-polymorphic base classes too. When selecting this option, be aware that the standard library headers do not obey all of these guidelines; use grep -v to filter out those warnings.

-Wstrict-null-sentinel (C++ and Objective-C++ only) Warn about the use of an uncasted "NULL" as sentinel. When compiling only with GCC this is a valid sentinel, as "NULL" is defined to "__null". Although it is a null pointer constant rather than a null pointer, it is guaranteed to be of the same size as a pointer. But this use is not portable across different compilers.
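A compilable variant of the -Wpessimizing-move sketch above; the function and variable names are invented for this illustration. Building it with g++ -Wall normally reports that the "std::move" in the return statement prevents copy elision:

      #include <string>
      #include <utility>

      // returning the local by name would allow copy elision (NRVO);
      // wrapping it in std::move defeats that optimization
      std::string make_name() {
          std::string s = "example";
          return std::move(s);      // flagged by -Wpessimizing-move
      }

      int main() {
          std::string n = make_name();
          return static_cast<int>(n.size());
      }

Writing "return s;" instead keeps the value category that permits elision and silences the warning.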
-Wno-non-template-friend (C++ and Objective-C++ only) Disable warnings when non-template friend functions are declared within a template. In very old versions of GCC that predate implementation of the ISO standard, declarations such as friend int foo(int), where the name of the friend is an unqualified-id, could be interpreted as a particular specialization of a template function; the warning exists to diagnose compatibility problems, and is enabled by default. -Wold-style-cast (C++ and Objective-C++ only) Warn if an old-style (C-style) cast to a non-void type is used within a C++ program. The new-style casts ("dynamic_cast", "static_cast", "reinterpret_cast", and "const_cast") are less vulnerable to unintended effects and much easier to search for. -Woverloaded-virtual (C++ and Objective-C++ only) Warn when a function declaration hides virtual functions from a base class. For example, in: struct A { virtual void f(); }; struct B: public A { void f(int); }; the "A" class version of "f" is hidden in "B", and code like: B* b; b->f(); fails to compile. -Wno-pmf-conversions (C++ and Objective-C++ only) Disable the diagnostic for converting a bound pointer to member function to a plain pointer. -Wsign-promo (C++ and Objective-C++ only) Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size. Previous versions of G++ tried to preserve unsignedness, but the standard mandates the current behavior. -Wtemplates (C++ and Objective-C++ only) Warn when a primary template declaration is encountered. Some coding rules disallow templates, and this may be used to enforce that rule. The warning is inactive inside a system header file, such as the STL, so one can still use the STL. One may also instantiate or specialize templates. -Wmultiple-inheritance (C++ and Objective-C++ only) Warn when a class is defined with multiple direct base classes. Some coding rules disallow multiple inheritance, and this may be used to enforce that rule. The warning is inactive inside a system header file, such as the STL, so one can still use the STL. One may also define classes that indirectly use multiple inheritance. -Wvirtual-inheritance Warn when a class is defined with a virtual direct base class. Some coding rules disallow multiple inheritance, and this may be used to enforce that rule. The warning is inactive inside a system header file, such as the STL, so one can still use the STL. One may also define classes that indirectly use virtual inheritance. -Wnamespaces Warn when a namespace definition is opened. Some coding rules disallow namespaces, and this may be used to enforce that rule. The warning is inactive inside a system header file, such as the STL, so one can still use the STL. One may also use using directives and qualified names. -Wno-terminate (C++ and Objective-C++ only) Disable the warning about a throw-expression that will immediately result in a call to "terminate". -Wno-class-conversion (C++ and Objective-C++ only) Disable the warning about the case when a conversion function converts an object to the same type, to a base class of that type, or to void; such a conversion function will never be called. Options Controlling Objective-C and Objective-C++ Dialects (NOTE: This manual does not describe the Objective-C and Objective-C++ languages themselves. This section describes the command-line options that are only meaningful for Objective-C and Objective-C++ programs. 
You can also use most of the language-independent GNU compiler options. For example, you might compile a file some_class.m like this: gcc -g -fgnu-runtime -O -c some_class.m In this example, -fgnu-runtime is an option meant only for Objective-C and Objective-C++ programs; you can use the other options with any language supported by GCC. Note that since Objective-C is an extension of the C language, Objective-C compilations may also use options specific to the C front-end (e.g., -Wtraditional). Similarly, Objective-C++ compilations may use C++-specific options (e.g., -Wabi). Here is a list of options that are only for compiling Objective-C and Objective-C++ programs: -fconstant-string-class=class-name Use class-name as the name of the class to instantiate for each literal string specified with the syntax "@"..."". The default class name is "NXConstantString" if the GNU runtime is being used, and "NSConstantString" if the NeXT runtime is being used (see below). The -fconstant-cfstrings option, if also present, overrides the -fconstant-string-class setting and cause "@"..."" literals to be laid out as constant CoreFoundation strings. -fgnu-runtime Generate object code compatible with the standard GNU Objective-C runtime. This is the default for most types of systems. -fnext-runtime Generate output compatible with the NeXT runtime. This is the default for NeXT-based systems, including Darwin and Mac OS X. The macro "__NEXT_RUNTIME__" is predefined if (and only if) this option is used. -fno-nil-receivers Assume that all Objective-C message dispatches ("[receiver message:arg]") in this translation unit ensure that the receiver is not "nil". This allows for more efficient entry points in the runtime to be used. This option is only available in conjunction with the NeXT runtime and ABI version 0 or 1. -fobjc-abi-version=n Use version n of the Objective-C ABI for the selected runtime. This option is currently supported only for the NeXT runtime. In that case, Version 0 is the traditional (32-bit) ABI without support for properties and other Objective-C 2.0 additions. Version 1 is the traditional (32-bit) ABI with support for properties and other Objective- C 2.0 additions. Version 2 is the modern (64-bit) ABI. If nothing is specified, the default is Version 0 on 32-bit target machines, and Version 2 on 64-bit target machines. -fobjc-call-cxx-cdtors For each Objective-C class, check if any of its instance variables is a C++ object with a non-trivial default constructor. If so, synthesize a special "- (id) .cxx_construct" instance method which runs non-trivial default constructors on any such instance variables, in order, and then return "self". Similarly, check if any instance variable is a C++ object with a non-trivial destructor, and if so, synthesize a special "- (void) .cxx_destruct" method which runs all such default destructors, in reverse order. The "- (id) .cxx_construct" and "- (void) .cxx_destruct" methods thusly generated only operate on instance variables declared in the current Objective-C class, and not those inherited from superclasses. It is the responsibility of the Objective-C runtime to invoke all such methods in an object's inheritance hierarchy. The "- (id) .cxx_construct" methods are invoked by the runtime immediately after a new object instance is allocated; the "- (void) .cxx_destruct" methods are invoked immediately before the runtime deallocates an object instance. 
As of this writing, only the NeXT runtime on Mac OS X 10.4 and later has support for invoking the "- (id) .cxx_construct" and "- (void) .cxx_destruct" methods. -fobjc-direct-dispatch Allow fast jumps to the message dispatcher. On Darwin this is accomplished via the comm page. -fobjc-exceptions Enable syntactic support for structured exception handling in Objective-C, similar to what is offered by C++. This option is required to use the Objective-C keywords @try, @throw, @catch, @finally and @synchronized. This option is available with both the GNU runtime and the NeXT runtime (but not available in conjunction with the NeXT runtime on Mac OS X 10.2 and earlier). -fobjc-gc Enable garbage collection (GC) in Objective-C and Objective-C++ programs. This option is only available with the NeXT runtime; the GNU runtime has a different garbage collection implementation that does not require special compiler flags. -fobjc-nilcheck For the NeXT runtime with version 2 of the ABI, check for a nil receiver in method invocations before doing the actual method call. This is the default and can be disabled using -fno-objc-nilcheck. Class methods and super calls are never checked for nil in this way no matter what this flag is set to. Currently this flag does nothing when the GNU runtime, or an older version of the NeXT runtime ABI, is used. -fobjc-std=objc1 Conform to the language syntax of Objective-C 1.0, the language recognized by GCC 4.0. This only affects the Objective-C additions to the C/C++ language; it does not affect conformance to C/C++ standards, which is controlled by the separate C/C++ dialect option flags. When this option is used with the Objective-C or Objective-C++ compiler, any Objective-C syntax that is not recognized by GCC 4.0 is rejected. This is useful if you need to make sure that your Objective-C code can be compiled with older versions of GCC. -freplace-objc-classes Emit a special marker instructing ld(1) not to statically link in the resulting object file, and allow dyld(1) to load it in at run time instead. This is used in conjunction with the Fix-and-Continue debugging mode, where the object file in question may be recompiled and dynamically reloaded in the course of program execution, without the need to restart the program itself. Currently, Fix-and-Continue functionality is only available in conjunction with the NeXT runtime on Mac OS X 10.3 and later. -fzero-link When compiling for the NeXT runtime, the compiler ordinarily replaces calls to "objc_getClass("...")" (when the name of the class is known at compile time) with static class references that get initialized at load time, which improves run-time performance. Specifying the -fzero-link flag suppresses this behavior and causes calls to "objc_getClass("...")" to be retained. This is useful in Zero-Link debugging mode, since it allows for individual class implementations to be modified during program execution. The GNU runtime currently always retains calls to "objc_get_class("...")" regardless of command-line options. -fno-local-ivars By default instance variables in Objective-C can be accessed as if they were local variables from within the methods of the class they're declared in. This can lead to shadowing between instance variables and other variables declared either locally inside a class method or globally with the same name. Specifying the -fno-local-ivars flag disables this behavior thus avoiding variable shadowing issues. 
-fivar-visibility=[public|protected|private|package] Set the default instance variable visibility to the specified option so that instance variables declared outside the scope of any access modifier directives default to the specified visibility. -gen-decls Dump interface declarations for all classes seen in the source file to a file named sourcename.decl. -Wassign-intercept (Objective-C and Objective-C++ only) Warn whenever an Objective-C assignment is being intercepted by the garbage collector. -Wno-protocol (Objective-C and Objective-C++ only) If a class is declared to implement a protocol, a warning is issued for every method in the protocol that is not implemented by the class. The default behavior is to issue a warning for every method not explicitly implemented in the class, even if a method implementation is inherited from the superclass. If you use the -Wno-protocol option, then methods inherited from the superclass are considered to be implemented, and no warning is issued for them. -Wselector (Objective-C and Objective-C++ only) Warn if multiple methods of different types for the same selector are found during compilation. The check is performed on the list of methods in the final stage of compilation. Additionally, a check is performed for each selector appearing in a "@selector(...)" expression, and a corresponding method for that selector has been found during compilation. Because these checks scan the method table only at the end of compilation, these warnings are not produced if the final stage of compilation is not reached, for example because an error is found during compilation, or because the -fsyntax-only option is being used. -Wstrict-selector-match (Objective-C and Objective-C++ only) Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type "id" or "Class". When this flag is off (which is the default behavior), the compiler omits such warnings if any differences found are confined to types that share the same size and alignment. -Wundeclared-selector (Objective-C and Objective-C++ only) Warn if a "@selector(...)" expression referring to an undeclared selector is found. A selector is considered undeclared if no method with that name has been declared before the "@selector(...)" expression, either explicitly in an @interface or @protocol declaration, or implicitly in an @implementation section. This option always performs its checks as soon as a "@selector(...)" expression is found, while -Wselector only performs its checks in the final stage of compilation. This also enforces the coding style convention that methods and selectors must be declared before being used. -print-objc-runtime-info Generate C header describing the largest structure that is passed by value, if any. Options to Control Diagnostic Messages Formatting Traditionally, diagnostic messages have been formatted irrespective of the output device's aspect (e.g. its width, ...). You can use the options described below to control the formatting algorithm for diagnostic messages, e.g. how many characters per line, how often source location information should be reported. Note that some language front ends may not honor these options. -fmessage-length=n Try to format error messages so that they fit on lines of about n characters. If n is zero, then no line-wrapping is done; each error message appears on a single line. This is the default for all front ends. 
Note - this option also affects the display of the #error and #warning pre-processor directives, and the deprecated function/type/variable attribute. It does not, however, affect the pragma GCC warning and pragma GCC error pragmas.

-fdiagnostics-show-location=once Only meaningful in line-wrapping mode. Instructs the diagnostic messages reporter to emit source location information once; that is, in case the message is too long to fit on a single physical line and has to be wrapped, the source location won't be emitted (as prefix) again, over and over, in subsequent continuation lines. This is the default behavior.

-fdiagnostics-show-location=every-line Only meaningful in line-wrapping mode. Instructs the diagnostic messages reporter to emit the same source location information (as prefix) for physical lines that result from the process of breaking a message which is too long to fit on a single line.

-fdiagnostics-color[=WHEN] -fno-diagnostics-color Use color in diagnostics. WHEN is never, always, or auto. The default depends on how the compiler has been configured; it can be any of the above WHEN options, or never if the GCC_COLORS environment variable isn't present in the environment, and auto otherwise. auto means to use color only when the standard error is a terminal. The forms -fdiagnostics-color and -fno-diagnostics-color are aliases for -fdiagnostics-color=always and -fdiagnostics-color=never, respectively.

The colors are defined by the environment variable GCC_COLORS. Its value is a colon-separated list of capabilities and Select Graphic Rendition (SGR) substrings. SGR commands are interpreted by the terminal or terminal emulator. (See the section in the documentation of your text terminal for permitted values and their meanings as character attributes.) These substring values are integers in decimal representation and can be concatenated with semicolons. Common values to concatenate include 1 for bold, 4 for underline, 5 for blink, 7 for inverse, 39 for default foreground color, 30 to 37 for foreground colors, 90 to 97 for 16-color mode foreground colors, 38;5;0 to 38;5;255 for 88-color and 256-color modes foreground colors, 49 for default background color, 40 to 47 for background colors, 100 to 107 for 16-color mode background colors, and 48;5;0 to 48;5;255 for 88-color and 256-color modes background colors.

The default GCC_COLORS is

      error=01;31:warning=01;35:note=01;36:range1=32:range2=34:locus=01:\
      quote=01:fixit-insert=32:fixit-delete=31:\
      diff-filename=01:diff-hunk=32:diff-delete=31:diff-insert=32:\
      type-diff=01;32

where 01;31 is bold red, 01;35 is bold magenta, 01;36 is bold cyan, 32 is green, 34 is blue, 01 is bold, and 31 is red. Setting GCC_COLORS to the empty string disables colors. Supported capabilities are as follows.

"error=" SGR substring for error: markers.

"warning=" SGR substring for warning: markers.

"note=" SGR substring for note: markers.

"range1=" SGR substring for first additional range.

"range2=" SGR substring for second additional range.

"locus=" SGR substring for location information, file:line or file:line:column etc.

"quote=" SGR substring for information printed within quotes.

"fixit-insert=" SGR substring for fix-it hints suggesting text to be inserted or replaced.

"fixit-delete=" SGR substring for fix-it hints suggesting text to be deleted.

"diff-filename=" SGR substring for filename headers within generated patches.

"diff-hunk=" SGR substring for the starts of hunks within generated patches.

"diff-delete=" SGR substring for deleted lines within generated patches.

"diff-insert=" SGR substring for inserted lines within generated patches.

"type-diff=" SGR substring for highlighting mismatching types within template arguments in the C++ frontend.

-fno-diagnostics-show-option By default, each diagnostic emitted includes text indicating the command-line option that directly controls the diagnostic (if such an option is known to the diagnostic machinery). Specifying the -fno-diagnostics-show-option flag suppresses that behavior.

-fno-diagnostics-show-caret By default, each diagnostic emitted includes the original source line and a caret ^ indicating the column. This option suppresses this information. The source line is truncated to n characters, if the -fmessage-length=n option is given. When the output is done to the terminal, the width is limited to the width given by the COLUMNS environment variable or, if not set, to the terminal width.

-fno-diagnostics-show-labels By default, when printing source code (via -fdiagnostics-show-caret), diagnostics can label ranges of source code with pertinent information, such as the types of expressions:

      printf ("foo %s bar", long_i + long_j);
                            ~^       ~~~~~~~~~~~~~~~
                             |                |
                             char *           long int

This option suppresses the printing of these labels (in the example above, the vertical bars and the "char *" and "long int" text).

-fno-diagnostics-show-line-numbers By default, when printing source code (via -fdiagnostics-show-caret), a left margin is printed, showing line numbers. This option suppresses this left margin.

-fdiagnostics-minimum-margin-width=width This option controls the minimum width of the left margin printed by -fdiagnostics-show-line-numbers. It defaults to 6.

-fdiagnostics-parseable-fixits Emit fix-it hints in a machine-parseable format, suitable for consumption by IDEs. For each fix-it, a line will be printed after the relevant diagnostic, starting with the string "fix-it:". For example:

      fix-it:"test.c":{45:3-45:21}:"gtk_widget_show_all"

The location is expressed as a half-open range, expressed as a count of bytes, starting at byte 1 for the initial column. In the above example, bytes 3 through 20 of line 45 of "test.c" are to be replaced with the given string:

      00000000011111111112222222222
      12345678901234567890123456789
        gtk_widget_showall (dlg);
        ^^^^^^^^^^^^^^^^^^
        gtk_widget_show_all

The filename and replacement string escape backslash as "\\", tab as "\t", newline as "\n", double quotes as "\"", non-printable characters as octal (e.g. vertical tab as "\013"). An empty replacement string indicates that the given range is to be removed. An empty range (e.g. "45:3-45:3") indicates that the string is to be inserted at the given position.

-fdiagnostics-generate-patch Print fix-it hints to stderr in unified diff format, after any diagnostics are printed. For example:

      --- test.c
      +++ test.c
      @@ -42,5 +42,5 @@

       void show_cb(GtkDialog *dlg)
       {
      -  gtk_widget_showall(dlg);
      +  gtk_widget_show_all(dlg);
       }

The diff may or may not be colorized, following the same rules as for diagnostics (see -fdiagnostics-color).
-fdiagnostics-show-template-tree In the C++ frontend, when printing diagnostics showing mismatching template types, such as: could not convert 'std::map<int, std::vector<double> >()' from 'map<[...],vector<double>>' to 'map<[...],vector<float>> the -fdiagnostics-show-template-tree flag enables printing a tree-like structure showing the common and differing parts of the types, such as: map< [...], vector< [double != float]>> The parts that differ are highlighted with color ("double" and "float" in this case). -fno-elide-type By default when the C++ frontend prints diagnostics showing mismatching template types, common parts of the types are printed as "[...]" to simplify the error message. For example: could not convert 'std::map<int, std::vector<double> >()' from 'map<[...],vector<double>>' to 'map<[...],vector<float>> Specifying the -fno-elide-type flag suppresses that behavior. This flag also affects the output of the -fdiagnostics-show-template-tree flag. -fno-show-column Do not print column numbers in diagnostics. This may be necessary if diagnostics are being scanned by a program that does not understand the column numbers, such as dejagnu. -fdiagnostics-format=FORMAT Select a different format for printing diagnostics. FORMAT is text or json. The default is text. The json format consists of a top-level JSON array containing JSON objects representing the diagnostics. The JSON is emitted as one line, without formatting; the examples below have been formatted for clarity. Diagnostics can have child diagnostics. For example, this error and note: misleading-indentation.c:15:3: warning: this 'if' clause does not guard... [-Wmisleading-indentation] 15 | if (flag) | ^~ misleading-indentation.c:17:5: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if' 17 | y = 2; | ^ might be printed in JSON form (after formatting) like this: [ { "kind": "warning", "locations": [ { "caret": { "column": 3, "file": "misleading-indentation.c", "line": 15 }, "finish": { "column": 4, "file": "misleading-indentation.c", "line": 15 } } ], "message": "this \u2018if\u2019 clause does not guard...", "option": "-Wmisleading-indentation", "children": [ { "kind": "note", "locations": [ { "caret": { "column": 5, "file": "misleading-indentation.c", "line": 17 } } ], "message": "...this statement, but the latter is ..." } ] }, ... ] where the "note" is a child of the "warning". A diagnostic has a "kind". If this is "warning", then there is an "option" key describing the command-line option controlling the warning. A diagnostic can contain zero or more locations. Each location has up to three positions within it: a "caret" position and optional "start" and "finish" positions. A location can also have an optional "label" string. For example, this error: bad-binary-ops.c:64:23: error: invalid operands to binary + (have 'S' {aka 'struct s'} and 'T' {aka 'struct t'}) 64 | return callee_4a () + callee_4b (); | ~~~~~~~~~~~~ ^ ~~~~~~~~~~~~ | | | | | T {aka struct t} | S {aka struct s} has three locations. Its primary location is at the "+" token at column 23. It has two secondary locations, describing the left and right-hand sides of the expression, which have labels. 
It might be printed in JSON form as: { "children": [], "kind": "error", "locations": [ { "caret": { "column": 23, "file": "bad-binary-ops.c", "line": 64 } }, { "caret": { "column": 10, "file": "bad-binary-ops.c", "line": 64 }, "finish": { "column": 21, "file": "bad-binary-ops.c", "line": 64 }, "label": "S {aka struct s}" }, { "caret": { "column": 25, "file": "bad-binary-ops.c", "line": 64 }, "finish": { "column": 36, "file": "bad-binary-ops.c", "line": 64 }, "label": "T {aka struct t}" } ], "message": "invalid operands to binary + ..." } If a diagnostic contains fix-it hints, it has a "fixits" array, consisting of half-open intervals, similar to the output of -fdiagnostics-parseable-fixits. For example, this diagnostic with a replacement fix-it hint: demo.c:8:15: error: 'struct s' has no member named 'colour'; did you mean 'color'? 8 | return ptr->colour; | ^~~~~~ | color might be printed in JSON form as: { "children": [], "fixits": [ { "next": { "column": 21, "file": "demo.c", "line": 8 }, "start": { "column": 15, "file": "demo.c", "line": 8 }, "string": "color" } ], "kind": "error", "locations": [ { "caret": { "column": 15, "file": "demo.c", "line": 8 }, "finish": { "column": 20, "file": "demo.c", "line": 8 } } ], "message": "\u2018struct s\u2019 has no member named ..." } where the fix-it hint suggests replacing the text from "start" up to but not including "next" with "string"'s value. Deletions are expressed via an empty value for "string", insertions by having "start" equal "next". Options to Request or Suppress Warnings Warnings are diagnostic messages that report constructions that are not inherently erroneous but that are risky or suggest there may have been an error. The following language-independent options do not enable specific warnings but control the kinds of diagnostics produced by GCC. -fsyntax-only Check the code for syntax errors, but don't do anything beyond that. -fmax-errors=n Limits the maximum number of error messages to n, at which point GCC bails out rather than attempting to continue processing the source code. If n is 0 (the default), there is no limit on the number of error messages produced. If -Wfatal-errors is also specified, then -Wfatal-errors takes precedence over this option. -w Inhibit all warning messages. -Werror Make all warnings into errors. -Werror= Make the specified warning into an error. The specifier for a warning is appended; for example -Werror=switch turns the warnings controlled by -Wswitch into errors. This switch takes a negative form, to be used to negate -Werror for specific warnings; for example -Wno-error=switch makes -Wswitch warnings not be errors, even when -Werror is in effect. The warning message for each controllable warning includes the option that controls the warning. That option can then be used with -Werror= and -Wno-error= as described above. (Printing of the option in the warning message can be disabled using the -fno-diagnostics-show-option flag.) Note that specifying -Werror=foo automatically implies -Wfoo. However, -Wno-error=foo does not imply anything. -Wfatal-errors This option causes the compiler to abort compilation on the first error occurred rather than trying to keep going and printing further error messages. You can request many specific warnings with options beginning with -W, for example -Wimplicit to request warnings on implicit declarations. Each of these specific warning options also has a negative form beginning -Wno- to turn off warnings; for example, -Wno-implicit. 
This manual lists only one of the two forms, whichever is not the default. For further language-specific options also refer to C++ Dialect Options and Objective-C and Objective-C++ Dialect Options. Some options, such as -Wall and -Wextra, turn on other options, such as -Wunused, which may turn on further options, such as -Wunused-value. The combined effect of positive and negative forms is that more specific options have priority over less specific ones, independently of their position in the command- line. For options of the same specificity, the last one takes effect. Options enabled or disabled via pragmas take effect as if they appeared at the end of the command-line. When an unrecognized warning option is requested (e.g., -Wunknown-warning), GCC emits a diagnostic stating that the option is not recognized. However, if the -Wno- form is used, the behavior is slightly different: no diagnostic is produced for -Wno-unknown-warning unless other diagnostics are being produced. This allows the use of new -Wno- options with old compilers, but if something goes wrong, the compiler warns that an unrecognized option is present. The effectiveness of some warnings depends on optimizations also being enabled. For example -Wsuggest-final-types is more effective with link-time optimization and -Wmaybe-uninitialized will not warn at all unless optimization is enabled. -Wpedantic -pedantic Issue all the warnings demanded by strict ISO C and ISO C++; reject all programs that use forbidden extensions, and some other programs that do not follow ISO C and ISO C++. For ISO C, follows the version of the ISO C standard specified by any -std option used. Valid ISO C and ISO C++ programs should compile properly with or without this option (though a rare few require -ansi or a -std option specifying the required version of ISO C). However, without this option, certain GNU extensions and traditional C and C++ features are supported as well. With this option, they are rejected. -Wpedantic does not cause warning messages for use of the alternate keywords whose names begin and end with __. Pedantic warnings are also disabled in the expression that follows "__extension__". However, only system header files should use these escape routes; application programs should avoid them. Some users try to use -Wpedantic to check programs for strict ISO C conformance. They soon find that it does not do quite what they want: it finds some non-ISO practices, but not all---only those for which ISO C requires a diagnostic, and some others for which diagnostics have been added. A feature to report any failure to conform to ISO C might be useful in some instances, but would require considerable additional work and would be quite different from -Wpedantic. We don't have plans to support such a feature in the near future. Where the standard specified with -std represents a GNU extended dialect of C, such as gnu90 or gnu99, there is a corresponding base standard, the version of ISO C on which the GNU extended dialect is based. Warnings from -Wpedantic are given where they are required by the base standard. (It does not make sense for such warnings to be given only for features not in the specified GNU C dialect, since by definition the GNU dialects of C include all features the compiler supports with the given option, and there would be nothing to warn about.) 
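As a small, hedged example of the kind of extension -Wpedantic reports (the file and function names are invented for this illustration): variable-length arrays are accepted by g++ as a GNU extension, but adding -Wpedantic diagnoses them in C++:

      // pedantic-demo.cc -- illustrative; compile with, for example:
      //   g++ -std=c++17 -Wpedantic -c pedantic-demo.cc
      int sum_first(int n) {
          int buf[n];           // VLA: a GNU extension in C++, diagnosed under -Wpedantic
          int total = 0;
          for (int i = 0; i < n; ++i) {
              buf[i] = i;
              total += buf[i];
          }
          return total;
      }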
-pedantic-errors Give an error whenever the base standard (see -Wpedantic) requires a diagnostic, in some cases where there is undefined behavior at compile-time and in some other cases that do not prevent compilation of programs that are valid according to the standard. This is not equivalent to -Werror=pedantic, since there are errors enabled by this option and not enabled by the latter and vice versa. -Wall This enables all the warnings about constructions that some users consider questionable, and that are easy to avoid (or modify to prevent the warning), even in conjunction with macros. This also enables some language-specific warnings described in C++ Dialect Options and Objective-C and Objective-C++ Dialect Options. -Wall turns on the following warning flags: -Waddress -Warray-bounds=1 (only with -O2) -Wbool-compare -Wbool-operation -Wc++11-compat -Wc++14-compat -Wcatch-value (C++ and Objective-C++ only) -Wchar-subscripts -Wcomment -Wduplicate-decl-specifier (C and Objective-C only) -Wenum-compare (in C/ObjC; this is on by default in C++) -Wformat -Wint-in-bool-context -Wimplicit (C and Objective-C only) -Wimplicit-int (C and Objective-C only) -Wimplicit-function-declaration (C and Objective-C only) -Winit-self (only for C++) -Wlogical-not-parentheses -Wmain (only for C/ObjC and unless -ffreestanding) -Wmaybe-uninitialized -Wmemset-elt-size -Wmemset-transposed-args -Wmisleading-indentation (only for C/C++) -Wmissing-attributes -Wmissing-braces (only for C/ObjC) -Wmultistatement-macros -Wnarrowing (only for C++) -Wnonnull -Wnonnull-compare -Wopenmp-simd -Wparentheses -Wpessimizing-move (only for C++) -Wpointer-sign -Wreorder -Wrestrict -Wreturn-type -Wsequence-point -Wsign-compare (only in C++) -Wsizeof-pointer-div -Wsizeof-pointer-memaccess -Wstrict-aliasing -Wstrict-overflow=1 -Wswitch -Wtautological-compare -Wtrigraphs -Wuninitialized -Wunknown-pragmas -Wunused-function -Wunused-label -Wunused-value -Wunused-variable -Wvolatile-register-var Note that some warning flags are not implied by -Wall. Some of them warn about constructions that users generally do not consider questionable, but which occasionally you might wish to check for; others warn about constructions that are necessary or hard to avoid in some cases, and there is no simple way to modify the code to suppress the warning. Some of them are enabled by -Wextra but many of them must be enabled individually. -Wextra This enables some extra warning flags that are not enabled by -Wall. (This option used to be called -W. The older name is still supported, but the newer name is more descriptive.) -Wclobbered -Wcast-function-type -Wdeprecated-copy (C++ only) -Wempty-body -Wignored-qualifiers -Wimplicit-fallthrough=3 -Wmissing-field-initializers -Wmissing-parameter-type (C only) -Wold-style-declaration (C only) -Woverride-init -Wsign-compare (C only) -Wredundant-move (only for C++) -Wtype-limits -Wuninitialized -Wshift-negative-value (in C++11 to C++17 and in C99 and newer) -Wunused-parameter (only with -Wunused or -Wall) -Wunused-but-set-parameter (only with -Wunused or -Wall) The option -Wextra also prints warning messages for the following cases: * A pointer is compared against integer zero with "<", "<=", ">", or ">=". * (C++ only) An enumerator and a non-enumerator both appear in a conditional expression. * (C++ only) Ambiguous virtual bases. * (C++ only) Subscripting an array that has been declared "register". * (C++ only) Taking the address of a variable that has been declared "register". 
* (C++ only) A base class is not initialized in the copy constructor of a derived class. -Wchar-subscripts Warn if an array subscript has type "char". This is a common cause of error, as programmers often forget that this type is signed on some machines. This warning is enabled by -Wall. -Wno-coverage-mismatch Warn if feedback profiles do not match when using the -fprofile-use option. If a source file is changed between compiling with -fprofile-generate and with -fprofile-use, the files with the profile feedback can fail to match the source file and GCC cannot use the profile feedback information. By default, this warning is enabled and is treated as an error. -Wno-coverage-mismatch can be used to disable the warning or -Wno-error=coverage-mismatch can be used to disable the error. Disabling the error for this warning can result in poorly optimized code and is useful only in the case of very minor changes such as bug fixes to an existing code-base. Completely disabling the warning is not recommended. -Wno-cpp (C, Objective-C, C++, Objective-C++ and Fortran only) Suppress warning messages emitted by "#warning" directives. -Wdouble-promotion (C, C++, Objective-C and Objective-C++ only) Give a warning when a value of type "float" is implicitly promoted to "double". CPUs with a 32-bit "single-precision" floating-point unit implement "float" in hardware, but emulate "double" in software. On such a machine, doing computations using "double" values is much more expensive because of the overhead required for software emulation. It is easy to accidentally do computations with "double" because floating-point literals are implicitly of type "double". For example, in: float area(float radius) { return 3.14159 * radius * radius; } the compiler performs the entire computation with "double" because the floating-point literal is a "double". -Wduplicate-decl-specifier (C and Objective-C only) Warn if a declaration has duplicate "const", "volatile", "restrict" or "_Atomic" specifier. This warning is enabled by -Wall. -Wformat -Wformat=n Check calls to "printf" and "scanf", etc., to make sure that the arguments supplied have types appropriate to the format string specified, and that the conversions specified in the format string make sense. This includes standard functions, and others specified by format attributes, in the "printf", "scanf", "strftime" and "strfmon" (an X/Open extension, not in the C standard) families (or other target-specific families). Which functions are checked without format attributes having been specified depends on the standard version selected, and such checks of functions without the attribute specified are disabled by -ffreestanding or -fno-builtin. The formats are checked against the format features supported by GNU libc version 2.2. These include all ISO C90 and C99 features, as well as features from the Single Unix Specification and some BSD and GNU extensions. Other library implementations may not support all these features; GCC does not support warning about features that go beyond a particular library's limitations. However, if -Wpedantic is used with -Wformat, warnings are given about format features not in the selected standard version (but not for "strfmon" formats, since those are not in any version of the C standard). -Wformat=1 -Wformat Option -Wformat is equivalent to -Wformat=1, and -Wno-format is equivalent to -Wformat=0. Since -Wformat also checks for null format arguments for several functions, -Wformat also implies -Wnonnull. 
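For instance, the following sketch (the function name is invented for this illustration) shows the kind of mismatch -Wformat reports: "%d" expects an "int" but receives a "long", while the directive with a matching length modifier is accepted silently:

      #include <cstdio>

      void log_size(long n) {
          std::printf("size = %d\n", n);    // type mismatch: diagnosed by -Wformat
          std::printf("size = %ld\n", n);   // matching length modifier: no warning
      }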
Some aspects of this level of format checking can be disabled by the options: -Wno-format-contains-nul, -Wno-format-extra-args, and -Wno-format-zero-length. -Wformat is enabled by -Wall. -Wno-format-contains-nul If -Wformat is specified, do not warn about format strings that contain NUL bytes. -Wno-format-extra-args If -Wformat is specified, do not warn about excess arguments to a "printf" or "scanf" format function. The C standard specifies that such arguments are ignored. Where the unused arguments lie between used arguments that are specified with $ operand number specifications, normally warnings are still given, since the implementation could not know what type to pass to "va_arg" to skip the unused arguments. However, in the case of "scanf" formats, this option suppresses the warning if the unused arguments are all pointers, since the Single Unix Specification says that such unused arguments are allowed. -Wformat-overflow -Wformat-overflow=level Warn about calls to formatted input/output functions such as "sprintf" and "vsprintf" that might overflow the destination buffer. When the exact number of bytes written by a format directive cannot be determined at compile-time it is estimated based on heuristics that depend on the level argument and on optimization. While enabling optimization will in most cases improve the accuracy of the warning, it may also result in false positives. -Wformat-overflow -Wformat-overflow=1 Level 1 of -Wformat-overflow enabled by -Wformat employs a conservative approach that warns only about calls that most likely overflow the buffer. At this level, numeric arguments to format directives with unknown values are assumed to have the value of one, and strings of unknown length to be empty. Numeric arguments that are known to be bounded to a subrange of their type, or string arguments whose output is bounded either by their directive's precision or by a finite set of string literals, are assumed to take on the value within the range that results in the most bytes on output. For example, the call to "sprintf" below is diagnosed because even with both a and b equal to zero, the terminating NUL character ('\0') appended by the function to the destination buffer will be written past its end. Increasing the size of the buffer by a single byte is sufficient to avoid the warning, though it may not be sufficient to avoid the overflow. void f (int a, int b) { char buf [13]; sprintf (buf, "a = %i, b = %i\n", a, b); } -Wformat-overflow=2 Level 2 warns also about calls that might overflow the destination buffer given an argument of sufficient length or magnitude. At level 2, unknown numeric arguments are assumed to have the minimum representable value for signed types with a precision greater than 1, and the maximum representable value otherwise. Unknown string arguments whose length cannot be assumed to be bounded either by the directive's precision, or by a finite set of string literals they may evaluate to, or the character array they may point to, are assumed to be 1 character long. At level 2, the call in the example above is again diagnosed, but this time because with a equal to a 32-bit "INT_MIN" the first %i directive will write some of its digits beyond the end of the destination buffer. To make the call safe regardless of the values of the two variables, the size of the destination buffer must be increased to at least 34 bytes. GCC includes the minimum size of the buffer in an informational note following the warning. 
An alternative to increasing the size of the destination buffer is to constrain the range of formatted values. The maximum length of string arguments can be bounded by specifying the precision in the format directive. When numeric arguments of format directives can be assumed to be bounded by less than the precision of their type, choosing an appropriate length modifier to the format specifier will reduce the required buffer size. For example, if a and b in the example above can be assumed to be within the precision of the "short int" type then using either the %hi format directive or casting the argument to "short" reduces the maximum required size of the buffer to 24 bytes. void f (int a, int b) { char buf [23]; sprintf (buf, "a = %hi, b = %i\n", a, (short)b); } -Wno-format-zero-length If -Wformat is specified, do not warn about zero-length formats. The C standard specifies that zero-length formats are allowed. -Wformat=2 Enable -Wformat plus additional format checks. Currently equivalent to -Wformat -Wformat-nonliteral -Wformat-security -Wformat-y2k. -Wformat-nonliteral If -Wformat is specified, also warn if the format string is not a string literal and so cannot be checked, unless the format function takes its format arguments as a "va_list". -Wformat-security If -Wformat is specified, also warn about uses of format functions that represent possible security problems. At present, this warns about calls to "printf" and "scanf" functions where the format string is not a string literal and there are no format arguments, as in "printf (foo);". This may be a security hole if the format string came from untrusted input and contains %n. (This is currently a subset of what -Wformat-nonliteral warns about, but in future warnings may be added to -Wformat-security that are not included in -Wformat-nonliteral.) -Wformat-signedness If -Wformat is specified, also warn if the format string requires an unsigned argument and the argument is signed and vice versa. -Wformat-truncation -Wformat-truncation=level Warn about calls to formatted input/output functions such as "snprintf" and "vsnprintf" that might result in output truncation. When the exact number of bytes written by a format directive cannot be determined at compile-time it is estimated based on heuristics that depend on the level argument and on optimization. While enabling optimization will in most cases improve the accuracy of the warning, it may also result in false positives. Except as noted otherwise, the option uses the same logic -Wformat-overflow. -Wformat-truncation -Wformat-truncation=1 Level 1 of -Wformat-truncation enabled by -Wformat employs a conservative approach that warns only about calls to bounded functions whose return value is unused and that will most likely result in output truncation. -Wformat-truncation=2 Level 2 warns also about calls to bounded functions whose return value is used and that might result in truncation given an argument of sufficient length or magnitude. -Wformat-y2k If -Wformat is specified, also warn about "strftime" formats that may yield only a two-digit year. -Wnonnull Warn about passing a null pointer for arguments marked as requiring a non-null value by the "nonnull" function attribute. -Wnonnull is included in -Wall and -Wformat. It can be disabled with the -Wno-nonnull option. -Wnonnull-compare Warn when comparing an argument marked with the "nonnull" function attribute against null inside the function. -Wnonnull-compare is included in -Wall. 
It can be disabled with the -Wno-nonnull-compare option.

-Wnull-dereference Warn if the compiler detects paths that trigger erroneous or undefined behavior due to dereferencing a null pointer. This option is only active when -fdelete-null-pointer-checks is active, which is enabled by optimizations in most targets. The precision of the warnings depends on the optimization options used.

-Winit-self (C, C++, Objective-C and Objective-C++ only) Warn about uninitialized variables that are initialized with themselves. Note this option can only be used with the -Wuninitialized option. For example, GCC warns about "i" being uninitialized in the following snippet only when -Winit-self has been specified:

      int f()
      {
        int i = i;
        return i;
      }

This warning is enabled by -Wall in C++.

-Wimplicit-int (C and Objective-C only) Warn when a declaration does not specify a type. This warning is enabled by -Wall.

-Wimplicit-function-declaration (C and Objective-C only) Give a warning whenever a function is used before being declared. In C99 mode (-std=c99 or -std=gnu99), this warning is enabled by default and it is made into an error by -pedantic-errors. This warning is also enabled by -Wall.

-Wimplicit (C and Objective-C only) Same as -Wimplicit-int and -Wimplicit-function-declaration. This warning is enabled by -Wall.

-Wimplicit-fallthrough -Wimplicit-fallthrough is the same as -Wimplicit-fallthrough=3 and -Wno-implicit-fallthrough is the same as -Wimplicit-fallthrough=0.

-Wimplicit-fallthrough=n Warn when a switch case falls through. For example:

      switch (cond)
        {
        case 1:
          a = 1;
          break;
        case 2:
          a = 2;
        case 3:
          a = 3;
          break;
        }

This warning does not warn when the last statement of a case cannot fall through, e.g. when there is a return statement or a call to a function declared with the noreturn attribute. -Wimplicit-fallthrough= also takes into account control flow statements, such as ifs, and only warns when appropriate. E.g.

      switch (cond)
        {
        case 1:
          if (i > 3)
            {
              bar (5);
              break;
            }
          else if (i < 1)
            {
              bar (0);
            }
          else
            return;
        default:
          ...
        }

Since there are occasions where a switch case fall through is desirable, GCC provides an attribute, "__attribute__ ((fallthrough))", that is to be used along with a null statement to suppress this warning that would normally occur:

      switch (cond)
        {
        case 1:
          bar (0);
          __attribute__ ((fallthrough));
        default:
          ...
        }

C++17 provides a standard way to suppress the -Wimplicit-fallthrough warning using "[[fallthrough]];" instead of the GNU attribute. In C++11 or C++14 users can use "[[gnu::fallthrough]];", which is a GNU extension. Instead of these attributes, it is also possible to add a fallthrough comment to silence the warning. The whole body of the C or C++ style comment should match one of the regular expressions listed below. The option argument n specifies what kind of comments are accepted:

* -Wimplicit-fallthrough=0 disables the warning altogether.

* -Wimplicit-fallthrough=1 matches the ".*" regular expression; any comment is used as a fallthrough comment.

* -Wimplicit-fallthrough=2 case insensitively matches the ".*falls?[ \t-]*thr(ough|u).*" regular expression.

* -Wimplicit-fallthrough=3 case sensitively matches one of the following regular expressions:

  * "-fallthrough"

  * "@fallthrough@"

  * "lint -fallthrough[ \t]*"

  * "[ \t.!]*(ELSE,? |INTENTIONAL(LY)? )?FALL(S | |-)?THR(OUGH|U)[ \t.!]*(-[^\n\r]*)?"

  * "[ \t.!]*(Else,? |Intentional(ly)? )?Fall((s | |-)[Tt]|t)hr(ough|u)[ \t.!]*(-[^\n\r]*)?"

  * "[ \t.!]*([Ee]lse,? |[Ii]ntentional(ly)? )?fall(s | |-)?thr(ough|u)[ \t.!]*(-[^\n\r]*)?"

* -Wimplicit-fallthrough=4 case sensitively matches one of the following regular expressions:

  * "-fallthrough"

  * "@fallthrough@"

  * "lint -fallthrough[ \t]*"

  * "[ \t]*FALLTHR(OUGH|U)[ \t]*"

* -Wimplicit-fallthrough=5 doesn't recognize any comments as fallthrough comments; only attributes disable the warning.

The comment needs to be followed after optional whitespace and other comments by "case" or "default" keywords or by a user label that precedes some "case" or "default" label.

      switch (cond)
        {
        case 1:
          bar (0);
          /* FALLTHRU */
        default:
          ...
        }

The -Wimplicit-fallthrough=3 warning is enabled by -Wextra.

-Wif-not-aligned (C, C++, Objective-C and Objective-C++ only) Control whether the warning triggered by the "warn_if_not_aligned" attribute should be issued. This is enabled by default. Use -Wno-if-not-aligned to disable it.

-Wignored-qualifiers (C and C++ only) Warn if the return type of a function has a type qualifier such as "const". For ISO C such a type qualifier has no effect, since the value returned by a function is not an lvalue. For C++, the warning is only emitted for scalar types or "void". ISO C prohibits qualified "void" return types on function definitions, so such return types always receive a warning even without this option. This warning is also enabled by -Wextra.

-Wignored-attributes (C and C++ only) Warn when an attribute is ignored. This is different from the -Wattributes option in that it warns whenever the compiler decides to drop an attribute, not that the attribute is either unknown, used in a wrong place, etc. This warning is enabled by default.

-Wmain Warn if the type of "main" is suspicious. "main" should be a function with external linkage, returning int, taking either zero, two, or three arguments of appropriate types. This warning is enabled by default in C++ and is enabled by either -Wall or -Wpedantic.

-Wmisleading-indentation (C and C++ only) Warn when the indentation of the code does not reflect the block structure. Specifically, a warning is issued for "if", "else", "while", and "for" clauses with a guarded statement that does not use braces, followed by an unguarded statement with the same indentation. In the following example, the call to "bar" is misleadingly indented as if it were guarded by the "if" conditional.

      if (some_condition ())
        foo ();
        bar ();  /* Gotcha: this is not guarded by the "if". */

In the case of mixed tabs and spaces, the warning uses the -ftabstop= option to determine if the statements line up (defaulting to 8). The warning is not issued for code involving multiline preprocessor logic such as the following example.

      if (flagA)
        foo (0);
      #if SOME_CONDITION_THAT_DOES_NOT_HOLD
      if (flagB)
      #endif
        foo (1);

The warning is not issued after a "#line" directive, since this typically indicates autogenerated code, and no assumptions can be made about the layout of the file that the directive references. This warning is enabled by -Wall in C and C++.

-Wmissing-attributes Warn when a declaration of a function is missing one or more attributes that a related function is declared with and whose absence may adversely affect the correctness or efficiency of generated code. For example, the warning is issued for declarations of aliases that use attributes to specify less restrictive requirements than those of their targets. This typically represents a potential optimization opportunity.
By contrast, the -Wattribute-alias=2 option controls warnings issued when the alias is more restrictive than the target, which could lead to incorrect code generation. Attributes considered include "alloc_align", "alloc_size", "cold", "const", "hot", "leaf", "malloc", "nonnull", "noreturn", "nothrow", "pure", "returns_nonnull", and "returns_twice".

In C++, the warning is issued when an explicit specialization of a primary template declared with attribute "alloc_align", "alloc_size", "assume_aligned", "format", "format_arg", "malloc", or "nonnull" is declared without it. Attributes "deprecated", "error", and "warning" suppress the warning.

You can use the "copy" attribute to apply the same set of attributes to a declaration as that on another declaration without explicitly enumerating the attributes. This attribute can be applied to declarations of functions, variables, or types.

-Wmissing-attributes is enabled by -Wall.

For example, since the declaration of the primary function template below makes use of both attribute "malloc" and "alloc_size", the declaration of the explicit specialization of the template is diagnosed because it is missing one of the attributes.

      template <class T>
      T* __attribute__ ((malloc, alloc_size (1)))
      allocate (size_t);

      template <>
      void* __attribute__ ((malloc))   // missing alloc_size
      allocate<void> (size_t);

-Wmissing-braces Warn if an aggregate or union initializer is not fully bracketed. In the following example, the initializer for "a" is not fully bracketed, but that for "b" is fully bracketed.

      int a[2][2] = { 0, 1, 2, 3 };
      int b[2][2] = { { 0, 1 }, { 2, 3 } };

This warning is enabled by -Wall in C.

-Wmissing-include-dirs (C, C++, Objective-C and Objective-C++ only) Warn if a user-supplied include directory does not exist.

-Wmissing-profile Warn if feedback profiles are missing when using the -fprofile-use option. This option diagnoses those cases where a new function or a new file is added to the user code between compiling with -fprofile-generate and with -fprofile-use, without regenerating the profiles. In these cases, the profile feedback data files do not contain any profile feedback information for the newly added function or file respectively. Also, in the case when profile count data (.gcda) files are removed, GCC cannot use any profile feedback information. In all these cases, warnings are issued to inform the user that a profile generation step is due. -Wno-missing-profile can be used to disable the warning. Ignoring the warning can result in poorly optimized code. Completely disabling the warning is not recommended and should be done only when non-existent profile data is justified.

-Wmultistatement-macros Warn about unsafe multiple statement macros that appear to be guarded by a clause such as "if", "else", "for", "switch", or "while", in which only the first statement is actually guarded after the macro is expanded. For example:

      #define DOIT x++; y++
      if (c)
        DOIT;

will increment "y" unconditionally, not just when "c" holds. This can usually be fixed by wrapping the macro in a do-while loop:

      #define DOIT do { x++; y++; } while (0)
      if (c)
        DOIT;

This warning is enabled by -Wall in C and C++.

-Wparentheses Warn if parentheses are omitted in certain contexts, such as when there is an assignment in a context where a truth value is expected, or when operators are nested whose precedence people often get confused about. Also warn if a comparison like "x<=y<=z" appears; this is equivalent to "(x<=y ?
1 : 0) <= z", which is a different interpretation from that of ordinary mathematical notation. Also warn for dangerous uses of the GNU extension to "?:" with omitted middle operand. When the condition in the "?": operator is a boolean expression, the omitted value is always 1. Often programmers expect it to be a value computed inside the conditional expression instead. For C++ this also warns for some cases of unnecessary parentheses in declarations, which can indicate an attempt at a function call instead of a declaration: { // Declares a local variable called mymutex. std::unique_lock<std::mutex> (mymutex); // User meant std::unique_lock<std::mutex> lock (mymutex); } This warning is enabled by -Wall. -Wsequence-point Warn about code that may have undefined semantics because of violations of sequence point rules in the C and C++ standards. The C and C++ standards define the order in which expressions in a C/C++ program are evaluated in terms of sequence points, which represent a partial ordering between the execution of parts of the program: those executed before the sequence point, and those executed after it. These occur after the evaluation of a full expression (one which is not part of a larger expression), after the evaluation of the first operand of a "&&", "||", "? :" or "," (comma) operator, before a function is called (but after the evaluation of its arguments and the expression denoting the called function), and in certain other places. Other than as expressed by the sequence point rules, the order of evaluation of subexpressions of an expression is not specified. All these rules describe only a partial order rather than a total order, since, for example, if two functions are called within one expression with no sequence point between them, the order in which the functions are called is not specified. However, the standards committee have ruled that function calls do not overlap. It is not specified when between sequence points modifications to the values of objects take effect. Programs whose behavior depends on this have undefined behavior; the C and C++ standards specify that "Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. Furthermore, the prior value shall be read only to determine the value to be stored.". If a program breaks these rules, the results on any particular implementation are entirely unpredictable. Examples of code with undefined behavior are "a = a++;", "a[n] = b[n++]" and "a[i++] = i;". Some more complicated cases are not diagnosed by this option, and it may give an occasional false positive result, but in general it has been found fairly effective at detecting this sort of problem in programs. The C++17 standard will define the order of evaluation of operands in more cases: in particular it requires that the right-hand side of an assignment be evaluated before the left-hand side, so the above examples are no longer undefined. But this warning will still warn about them, to help people avoid writing code that is undefined in C and earlier revisions of C++. The standard is worded confusingly, therefore there is some debate over the precise meaning of the sequence point rules in subtle cases. Links to discussions of the problem, including proposed formal definitions, may be found on the GCC readings page, at <http://gcc.gnu.org/readings.html >. This warning is enabled by -Wall for C and C++. 
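As a rough, hypothetical illustration of the two warnings above (this fragment is not part of the GCC documentation; the file and identifier names are invented), both diagnostics can be provoked in one translation unit, since -Wall enables -Wparentheses and -Wsequence-point:

        /* gcc -Wall -c sequence.c */
        #include <stdio.h>

        int main (void)
        {
            int a = 0, b = 1, i = 0;
            int arr[4] = { 0 };

            if (a = b)          /* -Wparentheses: assignment used as a truth value */
                puts ("assigned");

            arr[i++] = i;       /* -Wsequence-point: 'i' is modified and read with no
                                   intervening sequence point */
            return arr[0];
        }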
-Wno-return-local-addr Do not warn about returning a pointer (or in C++, a reference) to a variable that goes out of scope after the function returns. -Wreturn-type Warn whenever a function is defined with a return type that defaults to "int". Also warn about any "return" statement with no return value in a function whose return type is not "void" (falling off the end of the function body is considered returning without a value). For C only, warn about a "return" statement with an expression in a function whose return type is "void", unless the expression type is also "void". As a GNU extension, the latter case is accepted without a warning unless -Wpedantic is used. Attempting to use the return value of a non-"void" function other than "main" that flows off the end by reaching the closing curly brace that terminates the function is undefined. Unlike in C, in C++, flowing off the end of a non-"void" function other than "main" results in undefined behavior even when the value of the function is not used. This warning is enabled by default in C++ and by -Wall otherwise. -Wshift-count-negative Warn if shift count is negative. This warning is enabled by default. -Wshift-count-overflow Warn if shift count >= width of type. This warning is enabled by default. -Wshift-negative-value Warn if left shifting a negative value. This warning is enabled by -Wextra in C99 (and newer) and C++11 to C++17 modes. -Wshift-overflow -Wshift-overflow=n Warn about left shift overflows. This warning is enabled by default in C99 and C++11 modes (and newer). -Wshift-overflow=1 This is the warning level of -Wshift-overflow and is enabled by default in C99 and C++11 modes (and newer). This warning level does not warn about left-shifting 1 into the sign bit. (However, in C, such an overflow is still rejected in contexts where an integer constant expression is required.) No warning is emitted in C++2A mode (and newer), as signed left shifts always wrap. -Wshift-overflow=2 This warning level also warns about left-shifting 1 into the sign bit, unless C++14 mode (or newer) is active. -Wswitch Warn whenever a "switch" statement has an index of enumerated type and lacks a "case" for one or more of the named codes of that enumeration. (The presence of a "default" label prevents this warning.) "case" labels outside the enumeration range also provoke warnings when this option is used (even if there is a "default" label). This warning is enabled by -Wall. -Wswitch-default Warn whenever a "switch" statement does not have a "default" case. -Wswitch-enum Warn whenever a "switch" statement has an index of enumerated type and lacks a "case" for one or more of the named codes of that enumeration. "case" labels outside the enumeration range also provoke warnings when this option is used. The only difference between -Wswitch and this option is that this option gives a warning about an omitted enumeration code even if there is a "default" label. -Wswitch-bool Warn whenever a "switch" statement has an index of boolean type and the case values are outside the range of a boolean type. It is possible to suppress this warning by casting the controlling expression to a type other than "bool". For example: switch ((int) (a == 4)) { ... } This warning is enabled by default for C and C++ programs. -Wswitch-unreachable Warn whenever a "switch" statement contains statements between the controlling expression and the first case label, which will never be executed. For example: switch (cond) { i = 15; ... case 5: ... 
} -Wswitch-unreachable does not warn if the statement between the controlling expression and the first case label is just a declaration: switch (cond) { int i; ... case 5: i = 5; ... } This warning is enabled by default for C and C++ programs. -Wsync-nand (C and C++ only) Warn when "__sync_fetch_and_nand" and "__sync_nand_and_fetch" built-in functions are used. These functions changed semantics in GCC 4.4. -Wunused-but-set-parameter Warn whenever a function parameter is assigned to, but otherwise unused (aside from its declaration). To suppress this warning use the "unused" attribute. This warning is also enabled by -Wunused together with -Wextra. -Wunused-but-set-variable Warn whenever a local variable is assigned to, but otherwise unused (aside from its declaration). This warning is enabled by -Wall. To suppress this warning use the "unused" attribute. This warning is also enabled by -Wunused, which is enabled by -Wall. -Wunused-function Warn whenever a static function is declared but not defined or a non-inline static function is unused. This warning is enabled by -Wall. -Wunused-label Warn whenever a label is declared but not used. This warning is enabled by -Wall. To suppress this warning use the "unused" attribute. -Wunused-local-typedefs (C, Objective-C, C++ and Objective-C++ only) Warn when a typedef locally defined in a function is not used. This warning is enabled by -Wall. -Wunused-parameter Warn whenever a function parameter is unused aside from its declaration. To suppress this warning use the "unused" attribute. -Wno-unused-result Do not warn if a caller of a function marked with attribute "warn_unused_result" does not use its return value. The default is -Wunused-result. -Wunused-variable Warn whenever a local or static variable is unused aside from its declaration. This option implies -Wunused-const-variable=1 for C, but not for C++. This warning is enabled by -Wall. To suppress this warning use the "unused" attribute. -Wunused-const-variable -Wunused-const-variable=n Warn whenever a constant static variable is unused aside from its declaration. -Wunused-const-variable=1 is enabled by -Wunused-variable for C, but not for C++. In C this declares variable storage, but in C++ this is not an error since const variables take the place of "#define"s. To suppress this warning use the "unused" attribute. -Wunused-const-variable=1 This is the warning level that is enabled by -Wunused-variable for C. It warns only about unused static const variables defined in the main compilation unit, but not about static const variables declared in any header included. -Wunused-const-variable=2 This warning level also warns for unused constant static variables in headers (excluding system headers). This is the warning level of -Wunused-const-variable and must be explicitly requested since in C++ this isn't an error and in C it might be harder to clean up all headers included. -Wunused-value Warn whenever a statement computes a result that is explicitly not used. To suppress this warning cast the unused expression to "void". This includes an expression-statement or the left-hand side of a comma expression that contains no side effects. For example, an expression such as "x[i,j]" causes a warning, while "x[(void)i,j]" does not. This warning is enabled by -Wall. -Wunused All the above -Wunused options combined. In order to get a warning about an unused function parameter, you must either specify -Wextra -Wunused (note that -Wall implies -Wunused), or separately specify -Wunused-parameter. 
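The following sketch (the editor's own example, with invented names, not taken from the manual) shows typical triggers for several members of the -Wunused family and how the "unused" attribute silences one of them; compile with -Wall -Wextra so that -Wunused-parameter is included:

        /* gcc -Wall -Wextra -c unused.c */
        static int helper (void) { return 42; }      /* -Wunused-function: defined but never called */

        int compute (int used, int spare)            /* -Wunused-parameter: 'spare' */
        {
            int tmp = used * 2;                      /* -Wunused-but-set-variable: assigned, never read */
            tmp = used + 1;
            int debug_only __attribute__ ((unused)); /* the attribute suppresses the warning */
            return used;
        }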
-Wuninitialized Warn if an automatic variable is used without first being initialized or if a variable may be clobbered by a "setjmp" call. In C++, warn if a non-static reference or non-static "const" member appears in a class without constructors. If you want to warn about code that uses the uninitialized value of the variable in its own initializer, use the -Winit-self option. These warnings occur for individual uninitialized or clobbered elements of structure, union or array variables as well as for variables that are uninitialized or clobbered as a whole. They do not occur for variables or elements declared "volatile". Because these warnings depend on optimization, the exact variables or elements for which there are warnings depends on the precise optimization options and version of GCC used. Note that there may be no warning about a variable that is used only to compute a value that itself is never used, because such computations may be deleted by data flow analysis before the warnings are printed. -Winvalid-memory-model Warn for invocations of __atomic Builtins, __sync Builtins, and the C11 atomic generic functions with a memory consistency argument that is either invalid for the operation or outside the range of values of the "memory_order" enumeration. For example, since the "__atomic_store" and "__atomic_store_n" built-ins are only defined for the relaxed, release, and sequentially consistent memory orders the following code is diagnosed: void store (int *i) { __atomic_store_n (i, 0, memory_order_consume); } -Winvalid-memory-model is enabled by default. -Wmaybe-uninitialized For an automatic (i.e. local) variable, if there exists a path from the function entry to a use of the variable that is initialized, but there exist some other paths for which the variable is not initialized, the compiler emits a warning if it cannot prove the uninitialized paths are not executed at run time. These warnings are only possible in optimizing compilation, because otherwise GCC does not keep track of the state of variables. These warnings are made optional because GCC may not be able to determine when the code is correct in spite of appearing to have an error. Here is one example of how this can happen: { int x; switch (y) { case 1: x = 1; break; case 2: x = 4; break; case 3: x = 5; } foo (x); } If the value of "y" is always 1, 2 or 3, then "x" is always initialized, but GCC doesn't know this. To suppress the warning, you need to provide a default case with assert(0) or similar code. This option also warns when a non-volatile automatic variable might be changed by a call to "longjmp". The compiler sees only the calls to "setjmp". It cannot know where "longjmp" will be called; in fact, a signal handler could call it at any point in the code. As a result, you may get a warning even when there is in fact no problem because "longjmp" cannot in fact be called at the place that would cause a problem. Some spurious warnings can be avoided if you declare all the functions you use that never return as "noreturn". This warning is enabled by -Wall or -Wextra. -Wunknown-pragmas Warn when a "#pragma" directive is encountered that is not understood by GCC. If this command-line option is used, warnings are even issued for unknown pragmas in system header files. This is not the case if the warnings are only enabled by the -Wall command-line option. -Wno-pragmas Do not warn about misuses of pragmas, such as incorrect parameters, invalid syntax, or conflicts between pragmas. See also -Wunknown-pragmas. 
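As a hedged sketch of the suppression technique mentioned above for -Wmaybe-uninitialized (adding a default case so that every path the compiler can see initializes the variable), one might write the following; the names mirror the example in the text, and the warning only appears in optimizing compilations (e.g. -O2 -Wall):

        #include <assert.h>

        void foo (int);

        void bar (int y)
        {
            int x;
            switch (y)
            {
            case 1: x = 1; break;
            case 2: x = 4; break;
            case 3: x = 5; break;
            default: assert (0); x = 0;   /* lets GCC prove that 'x' is always set */
            }
            foo (x);
        }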
-Wno-prio-ctor-dtor Do not warn if a priority from 0 to 100 is used for constructor or destructor. The use of constructor and destructor attributes allow you to assign a priority to the constructor/destructor to control its order of execution before "main" is called or after it returns. The priority values must be greater than 100 as the compiler reserves priority values between 0--100 for the implementation. -Wstrict-aliasing This option is only active when -fstrict-aliasing is active. It warns about code that might break the strict aliasing rules that the compiler is using for optimization. The warning does not catch all cases, but does attempt to catch the more common pitfalls. It is included in -Wall. It is equivalent to -Wstrict-aliasing=3 -Wstrict-aliasing=n This option is only active when -fstrict-aliasing is active. It warns about code that might break the strict aliasing rules that the compiler is using for optimization. Higher levels correspond to higher accuracy (fewer false positives). Higher levels also correspond to more effort, similar to the way -O works. -Wstrict-aliasing is equivalent to -Wstrict-aliasing=3. Level 1: Most aggressive, quick, least accurate. Possibly useful when higher levels do not warn but -fstrict-aliasing still breaks the code, as it has very few false negatives. However, it has many false positives. Warns for all pointer conversions between possibly incompatible types, even if never dereferenced. Runs in the front end only. Level 2: Aggressive, quick, not too precise. May still have many false positives (not as many as level 1 though), and few false negatives (but possibly more than level 1). Unlike level 1, it only warns when an address is taken. Warns about incomplete types. Runs in the front end only. Level 3 (default for -Wstrict-aliasing): Should have very few false positives and few false negatives. Slightly slower than levels 1 or 2 when optimization is enabled. Takes care of the common pun+dereference pattern in the front end: "*(int*)&some_float". If optimization is enabled, it also runs in the back end, where it deals with multiple statement cases using flow-sensitive points-to information. Only warns when the converted pointer is dereferenced. Does not warn about incomplete types. -Wstrict-overflow -Wstrict-overflow=n This option is only active when signed overflow is undefined. It warns about cases where the compiler optimizes based on the assumption that signed overflow does not occur. Note that it does not warn about all cases where the code might overflow: it only warns about cases where the compiler implements some optimization. Thus this warning depends on the optimization level. An optimization that assumes that signed overflow does not occur is perfectly safe if the values of the variables involved are such that overflow never does, in fact, occur. Therefore this warning can easily give a false positive: a warning about code that is not actually a problem. To help focus on important issues, several warning levels are defined. No warnings are issued for the use of undefined signed overflow when estimating how many iterations a loop requires, in particular when determining whether a loop will be executed at all. -Wstrict-overflow=1 Warn about cases that are both questionable and easy to avoid. For example the compiler simplifies "x + 1 > x" to 1. This level of -Wstrict-overflow is enabled by -Wall; higher levels are not, and must be explicitly requested. 
-Wstrict-overflow=2 Also warn about other cases where a comparison is simplified to a constant. For example: "abs (x) >= 0". This can only be simplified when signed integer overflow is undefined, because "abs (INT_MIN)" overflows to "INT_MIN", which is less than zero. -Wstrict-overflow (with no level) is the same as -Wstrict-overflow=2. -Wstrict-overflow=3 Also warn about other cases where a comparison is simplified. For example: "x + 1 > 1" is simplified to "x > 0". -Wstrict-overflow=4 Also warn about other simplifications not covered by the above cases. For example: "(x * 10) / 5" is simplified to "x * 2". -Wstrict-overflow=5 Also warn about cases where the compiler reduces the magnitude of a constant involved in a comparison. For example: "x + 2 > y" is simplified to "x + 1 >= y". This is reported only at the highest warning level because this simplification applies to many comparisons, so this warning level gives a very large number of false positives. -Wstringop-overflow -Wstringop-overflow=type Warn for calls to string manipulation functions such as "memcpy" and "strcpy" that are determined to overflow the destination buffer. The optional argument is one greater than the type of Object Size Checking to perform to determine the size of the destination. The argument is meaningful only for functions that operate on character arrays but not for raw memory functions like "memcpy" which always make use of Object Size type-0. The option also warns for calls that specify a size in excess of the largest possible object or at most "SIZE_MAX / 2" bytes. The option produces the best results with optimization enabled but can detect a small subset of simple buffer overflows even without optimization in calls to the GCC built-in functions like "__builtin_memcpy" that correspond to the standard functions. In any case, the option warns about just a subset of buffer overflows detected by the corresponding overflow checking built-ins. For example, the option will issue a warning for the "strcpy" call below because it copies at least 5 characters (the string "blue" including the terminating NUL) into the buffer of size 4. enum Color { blue, purple, yellow }; const char* f (enum Color clr) { static char buf [4]; const char *str; switch (clr) { case blue: str = "blue"; break; case purple: str = "purple"; break; case yellow: str = "yellow"; break; } return strcpy (buf, str); // warning here } Option -Wstringop-overflow=2 is enabled by default. -Wstringop-overflow -Wstringop-overflow=1 The -Wstringop-overflow=1 option uses type-zero Object Size Checking to determine the sizes of destination objects. This is the default setting of the option. At this setting the option will not warn for writes past the end of subobjects of larger objects accessed by pointers unless the size of the largest surrounding object is known. When the destination may be one of several objects it is assumed to be the largest one of them. On Linux systems, when optimization is enabled at this setting the option warns for the same code as when the "_FORTIFY_SOURCE" macro is defined to a non-zero value. -Wstringop-overflow=2 The -Wstringop-overflow=2 option uses type-one Object Size Checking to determine the sizes of destination objects. At this setting the option will warn about overflows when writing to members of the largest complete objects whose exact size is known. 
It will, however, not warn for excessive writes to the same members of unknown objects referenced by pointers since they may point to arrays containing unknown numbers of elements. -Wstringop-overflow=3 The -Wstringop-overflow=3 option uses type-two Object Size Checking to determine the sizes of destination objects. At this setting the option warns about overflowing the smallest object or data member. This is the most restrictive setting of the option that may result in warnings for safe code. -Wstringop-overflow=4 The -Wstringop-overflow=4 option uses type-three Object Size Checking to determine the sizes of destination objects. At this setting the option will warn about overflowing any data members, and when the destination is one of several objects it uses the size of the largest of them to decide whether to issue a warning. Similarly to -Wstringop-overflow=3 this setting of the option may result in warnings for benign code. -Wstringop-truncation Warn for calls to bounded string manipulation functions such as "strncat", "strncpy", and "stpncpy" that may either truncate the copied string or leave the destination unchanged. In the following example, the call to "strncat" specifies a bound that is less than the length of the source string. As a result, the copy of the source will be truncated and so the call is diagnosed. To avoid the warning use "bufsize - strlen (buf) - 1)" as the bound. void append (char *buf, size_t bufsize) { strncat (buf, ".txt", 3); } As another example, the following call to "strncpy" results in copying to "d" just the characters preceding the terminating NUL, without appending the NUL to the end. Assuming the result of "strncpy" is necessarily a NUL- terminated string is a common mistake, and so the call is diagnosed. To avoid the warning when the result is not expected to be NUL-terminated, call "memcpy" instead. void copy (char *d, const char *s) { strncpy (d, s, strlen (s)); } In the following example, the call to "strncpy" specifies the size of the destination buffer as the bound. If the length of the source string is equal to or greater than this size the result of the copy will not be NUL-terminated. Therefore, the call is also diagnosed. To avoid the warning, specify "sizeof buf - 1" as the bound and set the last element of the buffer to "NUL". void copy (const char *s) { char buf[80]; strncpy (buf, s, sizeof buf); ... } In situations where a character array is intended to store a sequence of bytes with no terminating "NUL" such an array may be annotated with attribute "nonstring" to avoid this warning. Such arrays, however, are not suitable arguments to functions that expect "NUL"-terminated strings. To help detect accidental misuses of such arrays GCC issues warnings unless it can prove that the use is safe. -Wsuggest-attribute=[pure|const|noreturn|format|cold|malloc] Warn for cases where adding an attribute may be beneficial. The attributes currently supported are listed below. -Wsuggest-attribute=pure -Wsuggest-attribute=const -Wsuggest-attribute=noreturn -Wmissing-noreturn -Wsuggest-attribute=malloc Warn about functions that might be candidates for attributes "pure", "const" or "noreturn" or "malloc". The compiler only warns for functions visible in other compilation units or (in the case of "pure" and "const") if it cannot prove that the function returns normally. A function returns normally if it doesn't contain an infinite loop or return abnormally by throwing, calling "abort" or trapping. 
This analysis requires option -fipa-pure-const, which is enabled by default at -O and higher. Higher optimization levels improve the accuracy of the analysis. -Wsuggest-attribute=format -Wmissing-format-attribute Warn about function pointers that might be candidates for "format" attributes. Note these are only possible candidates, not absolute ones. GCC guesses that function pointers with "format" attributes that are used in assignment, initialization, parameter passing or return statements should have a corresponding "format" attribute in the resulting type. I.e. the left-hand side of the assignment or initialization, the type of the parameter variable, or the return type of the containing function respectively should also have a "format" attribute to avoid the warning. GCC also warns about function definitions that might be candidates for "format" attributes. Again, these are only possible candidates. GCC guesses that "format" attributes might be appropriate for any function that calls a function like "vprintf" or "vscanf", but this might not always be the case, and some functions for which "format" attributes are appropriate may not be detected. -Wsuggest-attribute=cold Warn about functions that might be candidates for "cold" attribute. This is based on static detection and generally will only warn about functions which always leads to a call to another "cold" function such as wrappers of C++ "throw" or fatal error reporting functions leading to "abort". -Wsuggest-final-types Warn about types with virtual methods where code quality would be improved if the type were declared with the C++11 "final" specifier, or, if possible, declared in an anonymous namespace. This allows GCC to more aggressively devirtualize the polymorphic calls. This warning is more effective with link time optimization, where the information about the class hierarchy graph is more complete. -Wsuggest-final-methods Warn about virtual methods where code quality would be improved if the method were declared with the C++11 "final" specifier, or, if possible, its type were declared in an anonymous namespace or with the "final" specifier. This warning is more effective with link-time optimization, where the information about the class hierarchy graph is more complete. It is recommended to first consider suggestions of -Wsuggest-final-types and then rebuild with new annotations. -Wsuggest-override Warn about overriding virtual functions that are not marked with the override keyword. -Walloc-zero Warn about calls to allocation functions decorated with attribute "alloc_size" that specify zero bytes, including those to the built-in forms of the functions "aligned_alloc", "alloca", "calloc", "malloc", and "realloc". Because the behavior of these functions when called with a zero size differs among implementations (and in the case of "realloc" has been deprecated) relying on it may result in subtle portability bugs and should be avoided. -Walloc-size-larger-than=byte-size Warn about calls to functions decorated with attribute "alloc_size" that attempt to allocate objects larger than the specified number of bytes, or where the result of the size computation in an integer type with infinite precision would exceed the value of PTRDIFF_MAX on the target. -Walloc-size-larger-than=PTRDIFF_MAX is enabled by default. Warnings controlled by the option can be disabled either by specifying byte-size of SIZE_MAX or more or by -Wno-alloc-size-larger-than. -Wno-alloc-size-larger-than Disable -Walloc-size-larger-than= warnings. 
The option is equivalent to -Walloc-size-larger-than=SIZE_MAX or larger. -Walloca This option warns on all uses of "alloca" in the source. -Walloca-larger-than=byte-size This option warns on calls to "alloca" with an integer argument whose value is either zero, or that is not bounded by a controlling predicate that limits its value to at most byte-size. It also warns for calls to "alloca" where the bound value is unknown. Arguments of non-integer types are considered unbounded even if they appear to be constrained to the expected range. For example, a bounded case of "alloca" could be: void func (size_t n) { void *p; if (n <= 1000) p = alloca (n); else p = malloc (n); f (p); } In the above example, passing "-Walloca-larger-than=1000" would not issue a warning because the call to "alloca" is known to be at most 1000 bytes. However, if "-Walloca-larger-than=500" were passed, the compiler would emit a warning. Unbounded uses, on the other hand, are uses of "alloca" with no controlling predicate constraining its integer argument. For example: void func () { void *p = alloca (n); f (p); } If "-Walloca-larger-than=500" were passed, the above would trigger a warning, but this time because of the lack of bounds checking. Note, that even seemingly correct code involving signed integers could cause a warning: void func (signed int n) { if (n < 500) { p = alloca (n); f (p); } } In the above example, n could be negative, causing a larger than expected argument to be implicitly cast into the "alloca" call. This option also warns when "alloca" is used in a loop. -Walloca-larger-than=PTRDIFF_MAX is enabled by default but is usually only effective when -ftree-vrp is active (default for -O2 and above). See also -Wvla-larger-than=byte-size. -Wno-alloca-larger-than Disable -Walloca-larger-than= warnings. The option is equivalent to -Walloca-larger-than=SIZE_MAX or larger. -Warray-bounds -Warray-bounds=n This option is only active when -ftree-vrp is active (default for -O2 and above). It warns about subscripts to arrays that are always out of bounds. This warning is enabled by -Wall. -Warray-bounds=1 This is the warning level of -Warray-bounds and is enabled by -Wall; higher levels are not, and must be explicitly requested. -Warray-bounds=2 This warning level also warns about out of bounds access for arrays at the end of a struct and for arrays accessed through pointers. This warning level may give a larger number of false positives and is deactivated by default. -Wattribute-alias=n -Wno-attribute-alias Warn about declarations using the "alias" and similar attributes whose target is incompatible with the type of the alias. -Wattribute-alias=1 The default warning level of the -Wattribute-alias option diagnoses incompatibilities between the type of the alias declaration and that of its target. Such incompatibilities are typically indicative of bugs. -Wattribute-alias=2 At this level -Wattribute-alias also diagnoses cases where the attributes of the alias declaration are more restrictive than the attributes applied to its target. These mismatches can potentially result in incorrect code generation. In other cases they may be benign and could be resolved simply by adding the missing attribute to the target. For comparison, see the -Wmissing-attributes option, which controls diagnostics when the alias declaration is less restrictive than the target, rather than more restrictive. 
Attributes considered include "alloc_align", "alloc_size", "cold", "const", "hot", "leaf", "malloc", "nonnull", "noreturn", "nothrow", "pure", "returns_nonnull", and "returns_twice". -Wattribute-alias is equivalent to -Wattribute-alias=1. This is the default. You can disable these warnings with either -Wno-attribute-alias or -Wattribute-alias=0. -Wbool-compare Warn about boolean expression compared with an integer value different from "true"/"false". For instance, the following comparison is always false: int n = 5; ... if ((n > 1) == 2) { ... } This warning is enabled by -Wall. -Wbool-operation Warn about suspicious operations on expressions of a boolean type. For instance, bitwise negation of a boolean is very likely a bug in the program. For C, this warning also warns about incrementing or decrementing a boolean, which rarely makes sense. (In C++, decrementing a boolean is always invalid. Incrementing a boolean is invalid in C++17, and deprecated otherwise.) This warning is enabled by -Wall. -Wduplicated-branches Warn when an if-else has identical branches. This warning detects cases like if (p != NULL) return 0; else return 0; It doesn't warn when both branches contain just a null statement. This warning also warn for conditional operators: int i = x ? *p : *p; -Wduplicated-cond Warn about duplicated conditions in an if-else-if chain. For instance, warn for the following code: if (p->q != NULL) { ... } else if (p->q != NULL) { ... } -Wframe-address Warn when the __builtin_frame_address or __builtin_return_address is called with an argument greater than 0. Such calls may return indeterminate values or crash the program. The warning is included in -Wall. -Wno-discarded-qualifiers (C and Objective-C only) Do not warn if type qualifiers on pointers are being discarded. Typically, the compiler warns if a "const char *" variable is passed to a function that takes a "char *" parameter. This option can be used to suppress such a warning. -Wno-discarded-array-qualifiers (C and Objective-C only) Do not warn if type qualifiers on arrays which are pointer targets are being discarded. Typically, the compiler warns if a "const int (*)[]" variable is passed to a function that takes a "int (*)[]" parameter. This option can be used to suppress such a warning. -Wno-incompatible-pointer-types (C and Objective-C only) Do not warn when there is a conversion between pointers that have incompatible types. This warning is for cases not covered by -Wno-pointer-sign, which warns for pointer argument passing or assignment with different signedness. -Wno-int-conversion (C and Objective-C only) Do not warn about incompatible integer to pointer and pointer to integer conversions. This warning is about implicit conversions; for explicit conversions the warnings -Wno-int-to-pointer-cast and -Wno-pointer-to-int-cast may be used. -Wno-div-by-zero Do not warn about compile-time integer division by zero. Floating-point division by zero is not warned about, as it can be a legitimate way of obtaining infinities and NaNs. -Wsystem-headers Print warning messages for constructs found in system header files. Warnings from system headers are normally suppressed, on the assumption that they usually do not indicate real problems and would only make the compiler output harder to read. Using this command-line option tells GCC to emit warnings from system headers as if they occurred in user code. 
However, note that using -Wall in conjunction with this option does not warn about unknown pragmas in system headers---for that, -Wunknown-pragmas must also be used. -Wtautological-compare Warn if a self-comparison always evaluates to true or false. This warning detects various mistakes such as: int i = 1; ... if (i > i) { ... } This warning also warns about bitwise comparisons that always evaluate to true or false, for instance: if ((a & 16) == 10) { ... } will always be false. This warning is enabled by -Wall. -Wtrampolines Warn about trampolines generated for pointers to nested functions. A trampoline is a small piece of data or code that is created at run time on the stack when the address of a nested function is taken, and is used to call the nested function indirectly. For some targets, it is made up of data only and thus requires no special treatment. But, for most targets, it is made up of code and thus requires the stack to be made executable in order for the program to work properly. -Wfloat-equal Warn if floating-point values are used in equality comparisons. The idea behind this is that sometimes it is convenient (for the programmer) to consider floating-point values as approximations to infinitely precise real numbers. If you are doing this, then you need to compute (by analyzing the code, or in some other way) the maximum or likely maximum error that the computation introduces, and allow for it when performing comparisons (and when producing output, but that's a different problem). In particular, instead of testing for equality, you should check to see whether the two values have ranges that overlap; and this is done with the relational operators, so equality comparisons are probably mistaken. -Wtraditional (C and Objective-C only) Warn about certain constructs that behave differently in traditional and ISO C. Also warn about ISO C constructs that have no traditional C equivalent, and/or problematic constructs that should be avoided. * Macro parameters that appear within string literals in the macro body. In traditional C macro replacement takes place within string literals, but in ISO C it does not. * In traditional C, some preprocessor directives did not exist. Traditional preprocessors only considered a line to be a directive if the # appeared in column 1 on the line. Therefore -Wtraditional warns about directives that traditional C understands but ignores because the # does not appear as the first character on the line. It also suggests you hide directives like "#pragma" not understood by traditional C by indenting them. Some traditional implementations do not recognize "#elif", so this option suggests avoiding it altogether. * A function-like macro that appears without arguments. * The unary plus operator. * The U integer constant suffix, or the F or L floating- point constant suffixes. (Traditional C does support the L suffix on integer constants.) Note, these suffixes appear in macros defined in the system headers of most modern systems, e.g. the _MIN/_MAX macros in "<limits.h>". Use of these macros in user code might normally lead to spurious warnings, however GCC's integrated preprocessor has enough context to avoid warning in these cases. * A function declared external in one block and then used after the end of the block. * A "switch" statement has an operand of type "long". * A non-"static" function declaration follows a "static" one. This construct is not accepted by some traditional C compilers. 
* The ISO type of an integer constant has a different width or signedness from its traditional type. This warning is only issued if the base of the constant is ten. I.e. hexadecimal or octal values, which typically represent bit patterns, are not warned about. * Usage of ISO string concatenation is detected. * Initialization of automatic aggregates. * Identifier conflicts with labels. Traditional C lacks a separate namespace for labels. * Initialization of unions. If the initializer is zero, the warning is omitted. This is done under the assumption that the zero initializer in user code appears conditioned on e.g. "__STDC__" to avoid missing initializer warnings and relies on default initialization to zero in the traditional C case. * Conversions by prototypes between fixed/floating-point values and vice versa. The absence of these prototypes when compiling with traditional C causes serious problems. This is a subset of the possible conversion warnings; for the full set use -Wtraditional-conversion. * Use of ISO C style function definitions. This warning intentionally is not issued for prototype declarations or variadic functions because these ISO C features appear in your code when using libiberty's traditional C compatibility macros, "PARAMS" and "VPARAMS". This warning is also bypassed for nested functions because that feature is already a GCC extension and thus not relevant to traditional C compatibility. -Wtraditional-conversion (C and Objective-C only) Warn if a prototype causes a type conversion that is different from what would happen to the same argument in the absence of a prototype. This includes conversions of fixed point to floating and vice versa, and conversions changing the width or signedness of a fixed-point argument except when the same as the default promotion. -Wdeclaration-after-statement (C and Objective-C only) Warn when a declaration is found after a statement in a block. This construct, known from C++, was introduced with ISO C99 and is by default allowed in GCC. It is not supported by ISO C90. -Wshadow Warn whenever a local variable or type declaration shadows another variable, parameter, type, class member (in C++), or instance variable (in Objective-C) or whenever a built-in function is shadowed. Note that in C++, the compiler warns if a local variable shadows an explicit typedef, but not if it shadows a struct/class/enum. Same as -Wshadow=global. -Wno-shadow-ivar (Objective-C only) Do not warn whenever a local variable shadows an instance variable in an Objective-C method. -Wshadow=global The default for -Wshadow. Warns for any (global) shadowing. -Wshadow=local Warn when a local variable shadows another local variable or parameter. This warning is enabled by -Wshadow=global. -Wshadow=compatible-local Warn when a local variable shadows another local variable or parameter whose type is compatible with that of the shadowing variable. In C++, type compatibility here means the type of the shadowing variable can be converted to that of the shadowed variable. The creation of this flag (in addition to -Wshadow=local) is based on the idea that when a local variable shadows another one of incompatible type, it is most likely intentional, not a bug or typo, as shown in the following example: for (SomeIterator i = SomeObj.begin(); i != SomeObj.end(); ++i) { for (int i = 0; i < N; ++i) { ... } ... } Since the two variable "i" in the example above have incompatible types, enabling only -Wshadow=compatible-local will not emit a warning. 
Because their types are incompatible, if a programmer accidentally uses one in place of the other, type checking will catch that and emit an error or warning. So not warning (about shadowing) in this case will not lead to undetected bugs. Use of this flag instead of -Wshadow=local can possibly reduce the number of warnings triggered by intentional shadowing. This warning is enabled by -Wshadow=local. -Wlarger-than=byte-size Warn whenever an object is defined whose size exceeds byte- size. -Wlarger-than=PTRDIFF_MAX is enabled by default. Warnings controlled by the option can be disabled either by specifying byte-size of SIZE_MAX or more or by -Wno-larger-than. -Wno-larger-than Disable -Wlarger-than= warnings. The option is equivalent to -Wlarger-than=SIZE_MAX or larger. -Wframe-larger-than=byte-size Warn if the size of a function frame exceeds byte-size. The computation done to determine the stack frame size is approximate and not conservative. The actual requirements may be somewhat greater than byte-size even if you do not get a warning. In addition, any space allocated via "alloca", variable-length arrays, or related constructs is not included by the compiler when determining whether or not to issue a warning. -Wframe-larger-than=PTRDIFF_MAX is enabled by default. Warnings controlled by the option can be disabled either by specifying byte-size of SIZE_MAX or more or by -Wno-frame-larger-than. -Wno-frame-larger-than Disable -Wframe-larger-than= warnings. The option is equivalent to -Wframe-larger-than=SIZE_MAX or larger. -Wno-free-nonheap-object Do not warn when attempting to free an object that was not allocated on the heap. -Wstack-usage=byte-size Warn if the stack usage of a function might exceed byte-size. The computation done to determine the stack usage is conservative. Any space allocated via "alloca", variable- length arrays, or related constructs is included by the compiler when determining whether or not to issue a warning. The message is in keeping with the output of -fstack-usage. * If the stack usage is fully static but exceeds the specified amount, it's: warning: stack usage is 1120 bytes * If the stack usage is (partly) dynamic but bounded, it's: warning: stack usage might be 1648 bytes * If the stack usage is (partly) dynamic and not bounded, it's: warning: stack usage might be unbounded -Wstack-usage=PTRDIFF_MAX is enabled by default. Warnings controlled by the option can be disabled either by specifying byte-size of SIZE_MAX or more or by -Wno-stack-usage. -Wno-stack-usage Disable -Wstack-usage= warnings. The option is equivalent to -Wstack-usage=SIZE_MAX or larger. -Wunsafe-loop-optimizations Warn if the loop cannot be optimized because the compiler cannot assume anything on the bounds of the loop indices. With -funsafe-loop-optimizations warn if the compiler makes such assumptions. -Wno-pedantic-ms-format (MinGW targets only) When used in combination with -Wformat and -pedantic without GNU extensions, this option disables the warnings about non- ISO "printf" / "scanf" format width specifiers "I32", "I64", and "I" used on Windows targets, which depend on the MS runtime. -Waligned-new Warn about a new-expression of a type that requires greater alignment than the "alignof(std::max_align_t)" but uses an allocation function without an explicit alignment parameter. This option is enabled by -Wall. Normally this only warns about global allocation functions, but -Waligned-new=all also warns about class member allocation functions. 
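To see how the byte-size limits described above behave in practice, a small hypothetical example (the 1024-byte thresholds and the file name are arbitrary choices, not defaults) could be compiled as shown in the leading comment; the exact byte counts reported depend on the target and optimization level:

        /* gcc -c -Wlarger-than=1024 -Wframe-larger-than=1024 -Wstack-usage=1024 sizes.c */
        char big_object[4096];          /* -Wlarger-than=1024: object exceeds the limit */

        int sum (void)
        {
            char scratch[2048];         /* pushes the frame and stack usage past 1024 bytes */
            int total = 0;
            for (unsigned i = 0; i < sizeof scratch; i++)
            {
                scratch[i] = (char) i;
                total += scratch[i];
            }
            return total;
        }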
-Wplacement-new -Wplacement-new=n Warn about placement new expressions with undefined behavior, such as constructing an object in a buffer that is smaller than the type of the object. For example, the placement new expression below is diagnosed because it attempts to construct an array of 64 integers in a buffer only 64 bytes large. char buf [64]; new (buf) int[64]; This warning is enabled by default. -Wplacement-new=1 This is the default warning level of -Wplacement-new. At this level the warning is not issued for some strictly undefined constructs that GCC allows as extensions for compatibility with legacy code. For example, the following "new" expression is not diagnosed at this level even though it has undefined behavior according to the C++ standard because it writes past the end of the one- element array. struct S { int n, a[1]; }; S *s = (S *)malloc (sizeof *s + 31 * sizeof s->a[0]); new (s->a)int [32](); -Wplacement-new=2 At this level, in addition to diagnosing all the same constructs as at level 1, a diagnostic is also issued for placement new expressions that construct an object in the last member of structure whose type is an array of a single element and whose size is less than the size of the object being constructed. While the previous example would be diagnosed, the following construct makes use of the flexible member array extension to avoid the warning at level 2. struct S { int n, a[]; }; S *s = (S *)malloc (sizeof *s + 32 * sizeof s->a[0]); new (s->a)int [32](); -Wpointer-arith Warn about anything that depends on the "size of" a function type or of "void". GNU C assigns these types a size of 1, for convenience in calculations with "void *" pointers and pointers to functions. In C++, warn also when an arithmetic operation involves "NULL". This warning is also enabled by -Wpedantic. -Wpointer-compare Warn if a pointer is compared with a zero character constant. This usually means that the pointer was meant to be dereferenced. For example: const char *p = foo (); if (p == '\0') return 42; Note that the code above is invalid in C++11. This warning is enabled by default. -Wtype-limits Warn if a comparison is always true or always false due to the limited range of the data type, but do not warn for constant expressions. For example, warn if an unsigned variable is compared against zero with "<" or ">=". This warning is also enabled by -Wextra. -Wabsolute-value (C and Objective-C only) Warn for calls to standard functions that compute the absolute value of an argument when a more appropriate standard function is available. For example, calling "abs(3.14)" triggers the warning because the appropriate function to call to compute the absolute value of a double argument is "fabs". The option also triggers warnings when the argument in a call to such a function has an unsigned type. This warning can be suppressed with an explicit type cast and it is also enabled by -Wextra. -Wcomment -Wcomments Warn whenever a comment-start sequence /* appears in a /* comment, or whenever a backslash-newline appears in a // comment. This warning is enabled by -Wall. -Wtrigraphs Warn if any trigraphs are encountered that might change the meaning of the program. Trigraphs within comments are not warned about, except those that would form escaped newlines. This option is implied by -Wall. If -Wall is not given, this option is still enabled unless trigraphs are enabled. To get trigraph conversion without warnings, but get the other -Wall warnings, use -trigraphs -Wall -Wno-trigraphs. 
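A brief illustrative fragment (the editor's own, with invented names) that trips both -Wtype-limits and -Wabsolute-value; both warnings are enabled by -Wextra:

        /* gcc -Wextra -c limits_demo.c */
        #include <stdlib.h>    /* abs */

        int check (unsigned int u, double d)
        {
            if (u >= 0)        /* -Wtype-limits: always true for an unsigned value */
                u++;
            return abs (d);    /* -Wabsolute-value: 'fabs' is the appropriate function for a double */
        }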
-Wundef Warn if an undefined identifier is evaluated in an "#if" directive. Such identifiers are replaced with zero. -Wexpansion-to-defined Warn whenever defined is encountered in the expansion of a macro (including the case where the macro is expanded by an #if directive). Such usage is not portable. This warning is also enabled by -Wpedantic and -Wextra. -Wunused-macros Warn about macros defined in the main file that are unused. A macro is used if it is expanded or tested for existence at least once. The preprocessor also warns if the macro has not been used at the time it is redefined or undefined. Built-in macros, macros defined on the command line, and macros defined in include files are not warned about. Note: If a macro is actually used, but only used in skipped conditional blocks, then the preprocessor reports it as unused. To avoid the warning in such a case, you might improve the scope of the macro's definition by, for example, moving it into the first skipped block. Alternatively, you could provide a dummy use with something like: #if defined the_macro_causing_the_warning #endif -Wno-endif-labels Do not warn whenever an "#else" or an "#endif" are followed by text. This sometimes happens in older programs with code of the form #if FOO ... #else FOO ... #endif FOO The second and third "FOO" should be in comments. This warning is on by default. -Wbad-function-cast (C and Objective-C only) Warn when a function call is cast to a non-matching type. For example, warn if a call to a function returning an integer type is cast to a pointer type. -Wc90-c99-compat (C and Objective-C only) Warn about features not present in ISO C90, but present in ISO C99. For instance, warn about use of variable length arrays, "long long" type, "bool" type, compound literals, designated initializers, and so on. This option is independent of the standards mode. Warnings are disabled in the expression that follows "__extension__". -Wc99-c11-compat (C and Objective-C only) Warn about features not present in ISO C99, but present in ISO C11. For instance, warn about use of anonymous structures and unions, "_Atomic" type qualifier, "_Thread_local" storage-class specifier, "_Alignas" specifier, "Alignof" operator, "_Generic" keyword, and so on. This option is independent of the standards mode. Warnings are disabled in the expression that follows "__extension__". -Wc11-c2x-compat (C and Objective-C only) Warn about features not present in ISO C11, but present in ISO C2X. For instance, warn about omitting the string in "_Static_assert". This option is independent of the standards mode. Warnings are disabled in the expression that follows "__extension__". -Wc++-compat (C and Objective-C only) Warn about ISO C constructs that are outside of the common subset of ISO C and ISO C++, e.g. request for implicit conversion from "void *" to a pointer to non-"void" type. -Wc++11-compat (C++ and Objective-C++ only) Warn about C++ constructs whose meaning differs between ISO C++ 1998 and ISO C++ 2011, e.g., identifiers in ISO C++ 1998 that are keywords in ISO C++ 2011. This warning turns on -Wnarrowing and is enabled by -Wall. -Wc++14-compat (C++ and Objective-C++ only) Warn about C++ constructs whose meaning differs between ISO C++ 2011 and ISO C++ 2014. This warning is enabled by -Wall. -Wc++17-compat (C++ and Objective-C++ only) Warn about C++ constructs whose meaning differs between ISO C++ 2014 and ISO C++ 2017. This warning is enabled by -Wall. 
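The sketch below (hypothetical macro, function, and file names) shows typical triggers for -Wundef and -Wbad-function-cast; neither warning is enabled by -Wall, so both must be requested explicitly:

        /* gcc -Wundef -Wbad-function-cast -c compat_demo.c */
        #if FEATURE_LEVEL > 2      /* -Wundef: FEATURE_LEVEL is undefined and evaluates to 0 */
        #  define FAST_PATH 1
        #endif

        int get_int (void);

        double as_double (void)
        {
            return (double) get_int ();   /* -Wbad-function-cast: call result cast to a
                                             non-matching type */
        }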
-Wcast-qual Warn whenever a pointer is cast so as to remove a type qualifier from the target type. For example, warn if a "const char *" is cast to an ordinary "char *". Also warn when making a cast that introduces a type qualifier in an unsafe way. For example, casting "char **" to "const char **" is unsafe, as in this example: /* p is char ** value. */ const char **q = (const char **) p; /* Assignment of readonly string to const char * is OK. */ *q = "string"; /* Now char** pointer points to read-only memory. */ **p = 'b'; -Wcast-align Warn whenever a pointer is cast such that the required alignment of the target is increased. For example, warn if a "char *" is cast to an "int *" on machines where integers can only be accessed at two- or four-byte boundaries. -Wcast-align=strict Warn whenever a pointer is cast such that the required alignment of the target is increased. For example, warn if a "char *" is cast to an "int *" regardless of the target machine. -Wcast-function-type Warn when a function pointer is cast to an incompatible function pointer. In a cast involving function types with a variable argument list only the types of initial arguments that are provided are considered. Any parameter of pointer- type matches any other pointer-type. Any benign differences in integral types are ignored, like "int" vs. "long" on ILP32 targets. Likewise type qualifiers are ignored. The function type "void (*) (void)" is special and matches everything, which can be used to suppress this warning. In a cast involving pointer to member types this warning warns whenever the type cast is changing the pointer to member type. This warning is enabled by -Wextra. -Wwrite-strings When compiling C, give string constants the type "const char[length]" so that copying the address of one into a non-"const" "char *" pointer produces a warning. These warnings help you find at compile time code that can try to write into a string constant, but only if you have been very careful about using "const" in declarations and prototypes. Otherwise, it is just a nuisance. This is why we did not make -Wall request these warnings. When compiling C++, warn about the deprecated conversion from string literals to "char *". This warning is enabled by default for C++ programs. -Wcatch-value -Wcatch-value=n (C++ and Objective-C++ only) Warn about catch handlers that do not catch via reference. With -Wcatch-value=1 (or -Wcatch-value for short) warn about polymorphic class types that are caught by value. With -Wcatch-value=2 warn about all class types that are caught by value. With -Wcatch-value=3 warn about all types that are not caught by reference. -Wcatch-value is enabled by -Wall. -Wclobbered Warn for variables that might be changed by "longjmp" or "vfork". This warning is also enabled by -Wextra. -Wconditionally-supported (C++ and Objective-C++ only) Warn for conditionally-supported (C++11 [intro.defs]) constructs. -Wconversion Warn for implicit conversions that may alter a value. This includes conversions between real and integer, like "abs (x)" when "x" is "double"; conversions between signed and unsigned, like "unsigned ui = -1"; and conversions to smaller types, like "sqrtf (M_PI)". Do not warn for explicit casts like "abs ((int) x)" and "ui = (unsigned) -1", or if the value is not changed by the conversion like in "abs (2.0)". Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion. 
For C++, also warn for confusing overload resolution for user-defined conversions; and conversions that never use a type conversion operator: conversions to "void", the same type, a base class or a reference to them. Warnings about conversions between signed and unsigned integers are disabled by default in C++ unless -Wsign-conversion is explicitly enabled. -Wno-conversion-null (C++ and Objective-C++ only) Do not warn for conversions between "NULL" and non-pointer types. -Wconversion-null is enabled by default. -Wzero-as-null-pointer-constant (C++ and Objective-C++ only) Warn when a literal 0 is used as null pointer constant. This can be useful to facilitate the conversion to "nullptr" in C++11. -Wsubobject-linkage (C++ and Objective-C++ only) Warn if a class type has a base or a field whose type uses the anonymous namespace or depends on a type with no linkage. If a type A depends on a type B with no or internal linkage, defining it in multiple translation units would be an ODR violation because the meaning of B is different in each translation unit. If A only appears in a single translation unit, the best way to silence the warning is to give it internal linkage by putting it in an anonymous namespace as well. The compiler doesn't give this warning for types defined in the main .C file, as those are unlikely to have multiple definitions. -Wsubobject-linkage is enabled by default. -Wdangling-else Warn about constructions where there may be confusion to which "if" statement an "else" branch belongs. Here is an example of such a case: { if (a) if (b) foo (); else bar (); } In C/C++, every "else" branch belongs to the innermost possible "if" statement, which in this example is "if (b)". This is often not what the programmer expected, as illustrated in the above example by indentation the programmer chose. When there is the potential for this confusion, GCC issues a warning when this flag is specified. To eliminate the warning, add explicit braces around the innermost "if" statement so there is no way the "else" can belong to the enclosing "if". The resulting code looks like this: { if (a) { if (b) foo (); else bar (); } } This warning is enabled by -Wparentheses. -Wdate-time Warn when macros "__TIME__", "__DATE__" or "__TIMESTAMP__" are encountered as they might prevent bit-wise-identical reproducible compilations. -Wdelete-incomplete (C++ and Objective-C++ only) Warn when deleting a pointer to incomplete type, which may cause undefined behavior at runtime. This warning is enabled by default. -Wuseless-cast (C++ and Objective-C++ only) Warn when an expression is casted to its own type. -Wempty-body Warn if an empty body occurs in an "if", "else" or "do while" statement. This warning is also enabled by -Wextra. -Wenum-compare Warn about a comparison between values of different enumerated types. In C++ enumerated type mismatches in conditional expressions are also diagnosed and the warning is enabled by default. In C this warning is enabled by -Wall. -Wextra-semi (C++, Objective-C++ only) Warn about redundant semicolon after in-class function definition. -Wjump-misses-init (C, Objective-C only) Warn if a "goto" statement or a "switch" statement jumps forward across the initialization of a variable, or jumps backward to a label after the variable has been initialized. This only warns about variables that are initialized when they are declared. This warning is only supported for C and Objective-C; in C++ this sort of branch is an error in any case. -Wjump-misses-init is included in -Wc++-compat. 
It can be disabled with the -Wno-jump-misses-init option. -Wsign-compare Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. In C++, this warning is also enabled by -Wall. In C, it is also enabled by -Wextra. -Wsign-conversion Warn for implicit conversions that may change the sign of an integer value, like assigning a signed integer expression to an unsigned integer variable. An explicit cast silences the warning. In C, this option is enabled also by -Wconversion. -Wfloat-conversion Warn for implicit conversions that reduce the precision of a real value. This includes conversions from real to integer, and from higher precision real to lower precision real values. This option is also enabled by -Wconversion. -Wno-scalar-storage-order Do not warn on suspicious constructs involving reverse scalar storage order. -Wsized-deallocation (C++ and Objective-C++ only) Warn about a definition of an unsized deallocation function void operator delete (void *) noexcept; void operator delete[] (void *) noexcept; without a definition of the corresponding sized deallocation function void operator delete (void *, std::size_t) noexcept; void operator delete[] (void *, std::size_t) noexcept; or vice versa. Enabled by -Wextra along with -fsized-deallocation. -Wsizeof-pointer-div Warn for suspicious divisions of two sizeof expressions that divide the pointer size by the element size, which is the usual way to compute the array size but won't work out correctly with pointers. This warning warns e.g. about "sizeof (ptr) / sizeof (ptr[0])" if "ptr" is not an array, but a pointer. This warning is enabled by -Wall. -Wsizeof-pointer-memaccess Warn for suspicious length parameters to certain string and memory built-in functions if the argument uses "sizeof". This warning triggers for example for "memset (ptr, 0, sizeof (ptr));" if "ptr" is not an array, but a pointer, and suggests a possible fix, or about "memcpy (&foo, ptr, sizeof (&foo));". -Wsizeof-pointer-memaccess also warns about calls to bounded string copy functions like "strncat" or "strncpy" that specify as the bound a "sizeof" expression of the source array. For example, in the following function the call to "strncat" specifies the size of the source string as the bound. That is almost certainly a mistake and so the call is diagnosed. void make_file (const char *name) { char path[PATH_MAX]; strncpy (path, name, sizeof path - 1); strncat (path, ".text", sizeof ".text"); ... } The -Wsizeof-pointer-memaccess option is enabled by -Wall. -Wsizeof-array-argument Warn when the "sizeof" operator is applied to a parameter that is declared as an array in a function definition. This warning is enabled by default for C and C++ programs. -Wmemset-elt-size Warn for suspicious calls to the "memset" built-in function, if the first argument references an array, and the third argument is a number equal to the number of elements, but not equal to the size of the array in memory. This indicates that the user has omitted a multiplication by the element size. This warning is enabled by -Wall. -Wmemset-transposed-args Warn for suspicious calls to the "memset" built-in function where the second argument is not zero and the third argument is zero. For example, the call "memset (buf, sizeof buf, 0)" is diagnosed because "memset (buf, 0, sizeof buf)" was meant instead. The diagnostic is only emitted if the third argument is a literal zero. 
Otherwise, if it is an expression that is folded to zero, or a cast of zero to some type, it is far less likely that the arguments have been mistakenly transposed and no warning is emitted. This warning is enabled by -Wall. -Waddress Warn about suspicious uses of memory addresses. These include using the address of a function in a conditional expression, such as "void func(void); if (func)", and comparisons against the memory address of a string literal, such as "if (x == "abc")". Such uses typically indicate a programmer error: the address of a function always evaluates to true, so its use in a conditional usually indicates that the programmer forgot the parentheses in a function call; and comparisons against string literals result in unspecified behavior and are not portable in C, so they usually indicate that the programmer intended to use "strcmp". This warning is enabled by -Wall. -Waddress-of-packed-member Warn when the address of a packed member of a struct or union is taken, which usually results in an unaligned pointer value. This is enabled by default. -Wlogical-op Warn about suspicious uses of logical operators in expressions. This includes using logical operators in contexts where a bit-wise operator is likely to be expected. Also warns when the operands of a logical operator are the same: extern int a; if (a < 0 && a < 0) { ... } -Wlogical-not-parentheses Warn about a logical not used on the left-hand side operand of a comparison. This option does not warn if the right operand is considered to be a boolean expression. Its purpose is to detect suspicious code like the following: int a; ... if (!a > 1) { ... } It is possible to suppress the warning by wrapping the LHS into parentheses: if ((!a) > 1) { ... } This warning is enabled by -Wall. -Waggregate-return Warn if any functions that return structures or unions are defined or called. (In languages where you can return an array, this also elicits a warning.) -Wno-aggressive-loop-optimizations Warn if in a loop with a constant number of iterations the compiler detects undefined behavior in some statement during one or more of the iterations. -Wno-attributes Do not warn if an unexpected "__attribute__" is used, such as unrecognized attributes, function attributes applied to variables, etc. This does not stop errors for incorrect use of supported attributes. -Wno-builtin-declaration-mismatch Warn if a built-in function is declared with an incompatible signature or as a non-function, or when a built-in function declared with a type that does not include a prototype is called with arguments whose promoted types do not match those expected by the function. When -Wextra is specified, also warn when a built-in function that takes arguments is declared without a prototype. The -Wbuiltin-declaration-mismatch warning is enabled by default. To avoid the warning, include the appropriate header to bring the prototypes of built-in functions into scope. For example, the call to "memset" below is diagnosed by the warning because the function expects a value of type "size_t" as its third argument but the type of 32 is "int". With -Wextra, the declaration of the function is diagnosed as well. extern void* memset (); void f (void *d) { memset (d, '\0', 32); } -Wno-builtin-macro-redefined Do not warn if certain built-in macros are redefined. This suppresses warnings for redefinition of "__TIMESTAMP__", "__TIME__", "__DATE__", "__FILE__", and "__BASE_FILE__".
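For illustration, a translation unit along the lines of the following sketch (the fixed date string is an invented example value) redefines one of these built-in macros; the redefinition is normally diagnosed, and -Wno-builtin-macro-redefined suppresses that diagnostic, which is sometimes done to make builds reproducible.

        /* Illustrative sketch only: redefining the built-in __DATE__
           macro.  Without -Wno-builtin-macro-redefined, GCC normally
           reports that a built-in macro has been redefined.  */
        #define __DATE__ "Jan  1 1970"

        const char build_date[] = __DATE__;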
-Wstrict-prototypes (C and Objective-C only) Warn if a function is declared or defined without specifying the argument types. (An old-style function definition is permitted without a warning if preceded by a declaration that specifies the argument types.) -Wold-style-declaration (C and Objective-C only) Warn for obsolescent usages, according to the C Standard, in a declaration. For example, warn if storage-class specifiers like "static" are not the first things in a declaration. This warning is also enabled by -Wextra. -Wold-style-definition (C and Objective-C only) Warn if an old-style function definition is used. A warning is given even if there is a previous prototype. -Wmissing-parameter-type (C and Objective-C only) Warn if a function parameter is declared without a type specifier in K&R-style functions: void foo(bar) { } This warning is also enabled by -Wextra. -Wmissing-prototypes (C and Objective-C only) Warn if a global function is defined without a previous prototype declaration. This warning is issued even if the definition itself provides a prototype. Use this option to detect global functions that do not have a matching prototype declaration in a header file. This option is not valid for C++ because all function declarations provide prototypes and a non-matching declaration declares an overload rather than conflicting with an earlier declaration. Use -Wmissing-declarations to detect missing declarations in C++. -Wmissing-declarations Warn if a global function is defined without a previous declaration. Do so even if the definition itself provides a prototype. Use this option to detect global functions that are not declared in header files. In C, no warnings are issued for functions with previous non-prototype declarations; use -Wmissing-prototypes to detect missing prototypes. In C++, no warnings are issued for function templates, or for inline functions, or for functions in anonymous namespaces. -Wmissing-field-initializers Warn if a structure's initializer has some fields missing. For example, the following code causes such a warning, because "x.h" is implicitly zero: struct s { int f, g, h; }; struct s x = { 3, 4 }; This option does not warn about designated initializers, so the following modification does not trigger a warning: struct s { int f, g, h; }; struct s x = { .f = 3, .g = 4 }; In C this option does not warn about the universal zero initializer { 0 }: struct s { int f, g, h; }; struct s x = { 0 }; Likewise, in C++ this option does not warn about the empty { } initializer, for example: struct s { int f, g, h; }; s x = { }; This warning is included in -Wextra. To get other -Wextra warnings without this one, use -Wextra -Wno-missing-field-initializers. -Wno-multichar Do not warn if a multicharacter constant ('FOOF') is used. Usually they indicate a typo in the user's code, as they have implementation-defined values, and should not be used in portable code. -Wnormalized=[none|id|nfc|nfkc] In ISO C and ISO C++, two identifiers are different if they are different sequences of characters. However, sometimes when characters outside the basic ASCII character set are used, you can have two different character sequences that look the same. To avoid confusion, the ISO 10646 standard sets out some normalization rules which, when applied, ensure that two sequences that look the same are turned into the same sequence. GCC can warn you if you are using identifiers that have not been normalized; this option controls that warning. There are four levels of warning supported by GCC.
The default is -Wnormalized=nfc, which warns about any identifier that is not in the ISO 10646 "C" normalized form, NFC. NFC is the recommended form for most uses. It is equivalent to -Wnormalized. Unfortunately, there are some characters allowed in identifiers by ISO C and ISO C++ that, when turned into NFC, are not allowed in identifiers. That is, there's no way to use these symbols in portable ISO C or C++ and have all your identifiers in NFC. -Wnormalized=id suppresses the warning for these characters. It is hoped that future versions of the standards involved will correct this, which is why this option is not the default. You can switch the warning off for all characters by writing -Wnormalized=none or -Wno-normalized. You should only do this if you are using some other normalization scheme (like "D"), because otherwise you can easily create bugs that are literally impossible to see. Some characters in ISO 10646 have distinct meanings but look identical in some fonts or display methodologies, especially once formatting has been applied. For instance "\u207F", "SUPERSCRIPT LATIN SMALL LETTER N", displays just like a regular "n" that has been placed in a superscript. ISO 10646 defines the NFKC normalization scheme to convert all these into a standard form as well, and GCC warns if your code is not in NFKC if you use -Wnormalized=nfkc. This warning is comparable to warning about every identifier that contains the letter O because it might be confused with the digit 0, and so is not the default, but may be useful as a local coding convention if the programming environment cannot be fixed to display these characters distinctly. -Wno-attribute-warning Do not warn about usage of functions declared with "warning" attribute. By default, this warning is enabled. -Wno-attribute-warning can be used to disable the warning or -Wno-error=attribute-warning can be used to disable the error when compiled with -Werror flag. -Wno-deprecated Do not warn about usage of deprecated features. -Wno-deprecated-declarations Do not warn about uses of functions, variables, and types marked as deprecated by using the "deprecated" attribute. -Wno-overflow Do not warn about compile-time overflow in constant expressions. -Wno-odr Warn about One Definition Rule violations during link-time optimization. Requires -flto-odr-type-merging to be enabled. Enabled by default. -Wopenmp-simd Warn if the vectorizer cost model overrides the OpenMP simd directive set by user. The -fsimd-cost-model=unlimited option can be used to relax the cost model. -Woverride-init (C and Objective-C only) Warn if an initialized field without side effects is overridden when using designated initializers. This warning is included in -Wextra. To get other -Wextra warnings without this one, use -Wextra -Wno-override-init. -Woverride-init-side-effects (C and Objective-C only) Warn if an initialized field with side effects is overridden when using designated initializers. This warning is enabled by default. -Wpacked Warn if a structure is given the packed attribute, but the packed attribute has no effect on the layout or size of the structure. Such structures may be mis-aligned for little benefit. 
For instance, in this code, the variable "f.x" in "struct bar" is misaligned even though "struct bar" does not itself have the packed attribute: struct foo { int x; char a, b, c, d; } __attribute__((packed)); struct bar { char z; struct foo f; }; -Wpacked-bitfield-compat The 4.1, 4.2 and 4.3 series of GCC ignore the "packed" attribute on bit-fields of type "char". This has been fixed in GCC 4.4 but the change can lead to differences in the structure layout. GCC informs you when the offset of such a field has changed in GCC 4.4. For example, there is no longer a 4-bit padding between field "a" and "b" in this structure: struct foo { char a:4; char b:8; } __attribute__ ((packed)); This warning is enabled by default. Use -Wno-packed-bitfield-compat to disable this warning. -Wpacked-not-aligned (C, C++, Objective-C and Objective-C++ only) Warn if a structure field with explicitly specified alignment in a packed struct or union is misaligned. For example, a warning will be issued on "struct S", like, "warning: alignment 1 of 'struct S' is less than 8", in this code: struct __attribute__ ((aligned (8))) S8 { char a[8]; }; struct __attribute__ ((packed)) S { struct S8 s8; }; This warning is enabled by -Wall. -Wpadded Warn if padding is included in a structure, either to align an element of the structure or to align the whole structure. Sometimes when this happens it is possible to rearrange the fields of the structure to reduce the padding and so make the structure smaller. -Wredundant-decls Warn if anything is declared more than once in the same scope, even in cases where multiple declaration is valid and changes nothing. -Wrestrict Warn when an object referenced by a "restrict"-qualified parameter (or, in C++, a "__restrict"-qualified parameter) is aliased by another argument, or when copies between such objects overlap. For example, the call to the "strcpy" function below attempts to truncate the string by replacing its initial characters with the last four. However, because the call writes the terminating NUL into "a[4]", the copies overlap and the call is diagnosed. void foo (void) { char a[] = "abcd1234"; strcpy (a, a + 4); ... } The -Wrestrict option detects some instances of simple overlap even without optimization but works best at -O2 and above. It is included in -Wall. -Wnested-externs (C and Objective-C only) Warn if an "extern" declaration is encountered within a function. -Wno-inherited-variadic-ctor Suppress warnings about use of C++11 inheriting constructors when the base class inherited from has a C variadic constructor; the warning is on by default because the ellipsis is not inherited. -Winline Warn if a function that is declared as inline cannot be inlined. Even with this option, the compiler does not warn about failures to inline functions declared in system headers. The compiler uses a variety of heuristics to determine whether or not to inline a function. For example, the compiler takes into account the size of the function being inlined and the amount of inlining that has already been done in the current function. Therefore, seemingly insignificant changes in the source program can cause the warnings produced by -Winline to appear or disappear. -Wno-invalid-offsetof (C++ and Objective-C++ only) Suppress warnings from applying the "offsetof" macro to a non-POD type. According to the 2014 ISO C++ standard, applying "offsetof" to a non-standard-layout type is undefined. In existing C++ implementations, however, "offsetof" typically gives meaningful results.
This flag is for users who are aware that they are writing nonportable code and who have deliberately chosen to ignore the warning about it. The restrictions on "offsetof" may be relaxed in a future version of the C++ standard. -Wint-in-bool-context Warn for suspicious use of integer values where boolean values are expected, such as conditional expressions (?:) using non-boolean integer constants in boolean context, like "if (a <= b ? 2 : 3)", or left shifting of signed integers in boolean context, like "for (a = 0; 1 << a; a++);". Likewise for all kinds of multiplications regardless of the data type. This warning is enabled by -Wall. -Wno-int-to-pointer-cast Suppress warnings from casts to pointer type of an integer of a different size. In C++, casting to a pointer type of smaller size is an error. -Wint-to-pointer-cast is enabled by default. -Wno-pointer-to-int-cast (C and Objective-C only) Suppress warnings from casts from a pointer to an integer type of a different size. -Winvalid-pch Warn if a precompiled header is found in the search path but cannot be used. -Wlong-long Warn if the "long long" type is used. This is enabled by either -Wpedantic or -Wtraditional in ISO C90 and C++98 modes. To inhibit the warning messages, use -Wno-long-long. -Wvariadic-macros Warn if variadic macros are used in ISO C90 mode, or if the GNU alternate syntax is used in ISO C99 mode. This is enabled by either -Wpedantic or -Wtraditional. To inhibit the warning messages, use -Wno-variadic-macros. -Wvarargs Warn upon questionable usage of the macros used to handle variable arguments like "va_start". This is the default. To inhibit the warning messages, use -Wno-varargs. -Wvector-operation-performance Warn if a vector operation is not implemented via SIMD capabilities of the architecture. Mainly useful for performance tuning. A vector operation can be implemented "piecewise", which means that the scalar operation is performed on every vector element; "in parallel", which means that the vector operation is implemented using scalars of a wider type, which is normally more efficient; and "as a single scalar", which means that the vector fits into a scalar type. -Wno-virtual-move-assign Suppress warnings about inheriting from a virtual base with a non-trivial C++11 move assignment operator. This is dangerous because if the virtual base is reachable along more than one path, it is moved multiple times, which can mean both objects end up in the moved-from state. If the move assignment operator is written to avoid moving from a moved-from object, this warning can be disabled. -Wvla Warn if a variable-length array is used in the code. -Wno-vla prevents the -Wpedantic warning for variable-length arrays. -Wvla-larger-than=byte-size If this option is used, the compiler will warn for declarations of variable-length arrays whose size is either unbounded, or bounded by an argument that allows the array size to exceed byte-size bytes. This is similar to how -Walloca-larger-than=byte-size works, but with variable-length arrays. Note that GCC may optimize small variable-length arrays of a known value into plain arrays, so this warning may not get triggered for such arrays. -Wvla-larger-than=PTRDIFF_MAX is enabled by default but is typically only effective when -ftree-vrp is active (default for -O2 and above). See also -Walloca-larger-than=byte-size. -Wno-vla-larger-than Disable -Wvla-larger-than= warnings. The option is equivalent to -Wvla-larger-than=SIZE_MAX or larger.
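As a concrete illustration of the constructs these VLA options diagnose, consider a sketch such as the following (the function and its sizes are invented for the example); only the array whose length depends on a parameter is reported:

        /* Sketch only.  "buf" is a variable-length array, so -Wvla
           reports it; because its size is bounded only by the parameter
           "n", -Wvla-larger-than= can report it as well (most reliably
           at -O2 and above, as noted above).  The fixed-size array
           "tmp" is not diagnosed.  */
        #include <string.h>

        void copy_n (const char *src, int n)
        {
          char tmp[64];                 /* ordinary array */
          char buf[n];                  /* variable-length array */

          memcpy (buf, src, (size_t) n);
          memcpy (tmp, buf, n < 64 ? (size_t) n : 64);
          (void) tmp[0];
        }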
-Wvolatile-register-var Warn if a register variable is declared volatile. The volatile modifier does not inhibit all optimizations that may eliminate reads and/or writes to register variables. This warning is enabled by -Wall. -Wdisabled-optimization Warn if a requested optimization pass is disabled. This warning does not generally indicate that there is anything wrong with your code; it merely indicates that GCC's optimizers are unable to handle the code effectively. Often, the problem is that your code is too big or too complex; GCC refuses to optimize programs when the optimization itself is likely to take inordinate amounts of time. -Wpointer-sign (C and Objective-C only) Warn for pointer argument passing or assignment with different signedness. This option is only supported for C and Objective-C. It is implied by -Wall and by -Wpedantic, which can be disabled with -Wno-pointer-sign. -Wstack-protector This option is only active when -fstack-protector is active. It warns about functions that are not protected against stack smashing. -Woverlength-strings Warn about string constants that are longer than the "minimum maximum" length specified in the C standard. Modern compilers generally allow string constants that are much longer than the standard's minimum limit, but very portable programs should avoid using longer strings. The limit applies after string constant concatenation, and does not count the trailing NUL. In C90, the limit was 509 characters; in C99, it was raised to 4095. C++98 does not specify a normative minimum maximum, so we do not diagnose overlength strings in C++. This option is implied by -Wpedantic, and can be disabled with -Wno-overlength-strings. -Wunsuffixed-float-constants (C and Objective-C only) Issue a warning for any floating constant that does not have a suffix. When used together with -Wsystem-headers it warns about such constants in system header files. This can be useful when preparing code to use with the "FLOAT_CONST_DECIMAL64" pragma from the decimal floating- point extension to C99. -Wno-designated-init (C and Objective-C only) Suppress warnings when a positional initializer is used to initialize a structure that has been marked with the "designated_init" attribute. -Whsa Issue a warning when HSAIL cannot be emitted for the compiled function or OpenMP construct. Options for Debugging Your Program To tell GCC to emit extra information for use by a debugger, in almost all cases you need only to add -g to your other options. GCC allows you to use -g with -O. The shortcuts taken by optimized code may occasionally be surprising: some variables you declared may not exist at all; flow of control may briefly move where you did not expect it; some statements may not be executed because they compute constant results or their values are already at hand; some statements may execute in different places because they have been moved out of loops. Nevertheless it is possible to debug optimized output. This makes it reasonable to use the optimizer for programs that might have bugs. If you are not using some other optimization option, consider using -Og with -g. With no -O option at all, some compiler passes that collect information useful for debugging do not run at all, so that -Og may result in a better debugging experience. -g Produce debugging information in the operating system's native format (stabs, COFF, XCOFF, or DWARF). GDB can work with this debugging information. 
On most systems that use stabs format, -g enables use of extra debugging information that only GDB can use; this extra information makes debugging work better in GDB but probably makes other debuggers crash or refuse to read the program. If you want to control for certain whether to generate the extra information, use -gstabs+, -gstabs, -gxcoff+, -gxcoff, or -gvms (see below). -ggdb Produce debugging information for use by GDB. This means to use the most expressive format available (DWARF, stabs, or the native format if neither of those is supported), including GDB extensions if at all possible. -gdwarf -gdwarf-version Produce debugging information in DWARF format (if that is supported). The value of version may be either 2, 3, 4 or 5; the default version for most targets is 4. DWARF Version 5 is only experimental. Note that with DWARF Version 2, some ports require and always use some non-conflicting DWARF 3 extensions in the unwind tables. Version 4 may require GDB 7.0 and -fvar-tracking-assignments for maximum benefit. GCC no longer supports DWARF Version 1, which is substantially different from Version 2 and later. For historical reasons, some other DWARF-related options (such as -fno-dwarf2-cfi-asm) retain a reference to DWARF Version 2 in their names, but apply to all currently-supported versions of DWARF. -gstabs Produce debugging information in stabs format (if that is supported), without GDB extensions. This is the format used by DBX on most BSD systems. On MIPS, Alpha and System V Release 4 systems this option produces stabs debugging output that is not understood by DBX. On System V Release 4 systems this option requires the GNU assembler. -gstabs+ Produce debugging information in stabs format (if that is supported), using GNU extensions understood only by the GNU debugger (GDB). The use of these extensions is likely to make other debuggers crash or refuse to read the program. -gxcoff Produce debugging information in XCOFF format (if that is supported). This is the format used by the DBX debugger on IBM RS/6000 systems. -gxcoff+ Produce debugging information in XCOFF format (if that is supported), using GNU extensions understood only by the GNU debugger (GDB). The use of these extensions is likely to make other debuggers crash or refuse to read the program, and may cause assemblers other than the GNU assembler (GAS) to fail with an error. -gvms Produce debugging information in Alpha/VMS debug format (if that is supported). This is the format used by DEBUG on Alpha/VMS systems. -glevel -ggdblevel -gstabslevel -gxcofflevel -gvmslevel Request debugging information and also use level to specify how much information. The default level is 2. Level 0 produces no debug information at all. Thus, -g0 negates -g. Level 1 produces minimal information, enough for making backtraces in parts of the program that you don't plan to debug. This includes descriptions of functions and external variables, and line number tables, but no information about local variables. Level 3 includes extra information, such as all the macro definitions present in the program. Some debuggers support macro expansion when you use -g3. If you use multiple -g options, with or without level numbers, the last such option is the one that is effective. -gdwarf does not accept a concatenated debug level, to avoid confusion with -gdwarf-level. Instead use an additional -glevel option to change the debug level for DWARF.
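For instance, in a sketch like the one below (names are illustrative), whether the macro can later be expanded inside the debugger depends only on the level chosen, e.g. -g versus -g3:

        /* Illustrative sketch.  Compiled with -g3, the definition of
           SCALE is recorded in the debug information, so a debugger
           that supports macro expansion can evaluate SCALE(10); with
           plain -g (level 2), functions, variables and line numbers are
           described but macro definitions are not.  */
        #define SCALE(x) ((x) * 4)

        int main (void)
        {
          int n = SCALE (10);
          return n - 40;
        }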
-feliminate-unused-debug-symbols Produce debugging information in stabs format (if that is supported), for only symbols that are actually used. -femit-class-debug-always Instead of emitting debugging information for a C++ class in only one object file, emit it in all object files using the class. This option should be used only with debuggers that are unable to handle the way GCC normally emits debugging information for classes because using this option increases the size of debugging information by as much as a factor of two. -fno-merge-debug-strings Direct the linker to not merge together strings in the debugging information that are identical in different object files. Merging is not supported by all assemblers or linkers. Merging decreases the size of the debug information in the output file at the cost of increasing link processing time. Merging is enabled by default. -fdebug-prefix-map=old=new When compiling files residing in directory old, record debugging information describing them as if the files resided in directory new instead. This can be used to replace a build-time path with an install-time path in the debug info. It can also be used to change an absolute path to a relative path by using . for new. This can give more reproducible builds, which are location independent, but may require an extra command to tell GDB where to find the source files. See also -ffile-prefix-map. -fvar-tracking Run variable tracking pass. It computes where variables are stored at each position in code. Better debugging information is then generated (if the debugging information format supports this information). It is enabled by default when compiling with optimization (-Os, -O, -O2, ...), debugging information (-g) and the debug info format supports it. -fvar-tracking-assignments Annotate assignments to user variables early in the compilation and attempt to carry the annotations over throughout the compilation all the way to the end, in an attempt to improve debug information while optimizing. Use of -gdwarf-4 is recommended along with it. It can be enabled even if var-tracking is disabled, in which case annotations are created and maintained, but discarded at the end. By default, this flag is enabled together with -fvar-tracking, except when selective scheduling is enabled. -gsplit-dwarf Separate as much DWARF debugging information as possible into a separate output file with the extension .dwo. This option allows the build system to avoid linking files with debug information. To be useful, this option requires a debugger capable of reading .dwo files. -gdescribe-dies Add description attributes to some DWARF DIEs that have no name attribute, such as artificial variables, external references and call site parameter DIEs. -gpubnames Generate DWARF ".debug_pubnames" and ".debug_pubtypes" sections. -ggnu-pubnames Generate ".debug_pubnames" and ".debug_pubtypes" sections in a format suitable for conversion into a GDB index. This option is only useful with a linker that can produce GDB index version 7. -fdebug-types-section When using DWARF Version 4 or higher, type DIEs can be put into their own ".debug_types" section instead of making them part of the ".debug_info" section. It is more efficient to put them in a separate comdat section since the linker can then remove duplicates. But not all DWARF consumers support ".debug_types" sections yet and on some objects ".debug_types" produces larger instead of smaller debugging information. 
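The effect of the -fdebug-prefix-map option described above can be pictured with a sketch like the following, in which the file name, directory and compile command are all invented for the example:

        /* Illustrative sketch.  Assume this file is
           /home/builder/src/app/util.c and is compiled as

               gcc -g -fdebug-prefix-map=/home/builder/src/app=. -c util.c

           The recorded DWARF paths then refer to ./util.c rather than
           the absolute build directory, so builds made in different
           directories can produce identical debug information.  */
        int answer (void)
        {
          return 42;
        }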
-grecord-gcc-switches -gno-record-gcc-switches This switch causes the command-line options used to invoke the compiler that may affect code generation to be appended to the DW_AT_producer attribute in DWARF debugging information. The options are concatenated with spaces separating them from each other and from the compiler version. It is enabled by default. See also -frecord-gcc-switches for another way of storing compiler options into the object file. -gstrict-dwarf Disallow using extensions of later DWARF standard version than selected with -gdwarf-version. On most targets using non-conflicting DWARF extensions from later standard versions is allowed. -gno-strict-dwarf Allow using extensions of later DWARF standard version than selected with -gdwarf-version. -gas-loc-support Inform the compiler that the assembler supports ".loc" directives. It may then use them for the assembler to generate DWARF2+ line number tables. This is generally desirable, because assembler-generated line-number tables are a lot more compact than those the compiler can generate itself. This option will be enabled by default if, at GCC configure time, the assembler was found to support such directives. -gno-as-loc-support Force GCC to generate DWARF2+ line number tables internally, if DWARF2+ line number tables are to be generated. -gas-locview-support Inform the compiler that the assembler supports "view" assignment and reset assertion checking in ".loc" directives. This option will be enabled by default if, at GCC configure time, the assembler was found to support them. -gno-as-locview-support Force GCC to assign view numbers internally, if -gvariable-location-views is explicitly requested. -gcolumn-info -gno-column-info Emit location column information into DWARF debugging information, rather than just file and line. This option is enabled by default. -gstatement-frontiers -gno-statement-frontiers This option causes GCC to create markers in the internal representation at the beginning of statements, and to keep them roughly in place throughout compilation, using them to guide the output of "is_stmt" markers in the line number table. This is enabled by default when compiling with optimization (-Os, -O, -O2, ...), and outputting DWARF 2 debug information at the normal level. -gvariable-location-views -gvariable-location-views=incompat5 -gno-variable-location-views Augment variable location lists with progressive view numbers implied from the line number table. This enables debug information consumers to inspect state at certain points of the program, even if no instructions associated with the corresponding source locations are present at that point. If the assembler lacks support for view numbers in line number tables, this will cause the compiler to emit the line number table, which generally makes them somewhat less compact. The augmented line number tables and location lists are fully backward-compatible, so they can be consumed by debug information consumers that are not aware of these augmentations, but they won't derive any benefit from them either. This is enabled by default when outputting DWARF 2 debug information at the normal level, as long as there is assembler support, -fvar-tracking-assignments is enabled and -gstrict-dwarf is not. When assembler support is not available, this may still be enabled, but it will force GCC to output internal line number tables, and if -ginternal-reset-location-views is not enabled, that will most certainly lead to silently mismatching location views.
There is a proposed representation for view numbers that is not backward compatible with the location list format introduced in DWARF 5; it can be enabled with -gvariable-location-views=incompat5. This option may be removed in the future; it is only provided as a reference implementation of the proposed representation. Debug information consumers are not expected to support this extended format, and they would be rendered unable to decode location lists using it. -ginternal-reset-location-views -gno-internal-reset-location-views Attempt to determine location views that can be omitted from location view lists. This requires the compiler to have very accurate insn length estimates, which isn't always the case, and it may cause incorrect view lists to be generated silently when using an assembler that does not support location view lists. The GNU assembler will flag any such error as a "view number mismatch". This is only enabled on ports that define a reliable estimation function. -ginline-points -gno-inline-points Generate extended debug information for inlined functions. Location view tracking markers are inserted at inlined entry points, so that address and view numbers can be computed and output in debug information. This can be enabled independently of location views, in which case the view numbers won't be output, but it can only be enabled along with statement frontiers, and it is only enabled by default if location views are enabled. -gz[=type] Produce compressed debug sections in DWARF format, if that is supported. If type is not given, the default type depends on the capabilities of the assembler and linker used. type may be one of none (don't compress debug sections), zlib (use zlib compression in ELF gABI format), or zlib-gnu (use zlib compression in traditional GNU format). If the linker doesn't support writing compressed debug sections, the option is rejected. Otherwise, if the assembler does not support them, -gz is silently ignored when producing object files. -femit-struct-debug-baseonly Emit debug information for struct-like types only when the base name of the compilation source file matches the base name of the file in which the struct is defined. This option substantially reduces the size of debugging information, but at significant potential loss in type information to the debugger. See -femit-struct-debug-reduced for a less aggressive option. See -femit-struct-debug-detailed for more detailed control. This option works only with DWARF debug output. -femit-struct-debug-reduced Emit debug information for struct-like types only when the base name of the compilation source file matches the base name of the file in which the type is defined, unless the struct is a template or defined in a system header. This option significantly reduces the size of debugging information, with some potential loss in type information to the debugger. See -femit-struct-debug-baseonly for a more aggressive option. See -femit-struct-debug-detailed for more detailed control. This option works only with DWARF debug output. -femit-struct-debug-detailed[=spec-list] Specify the struct-like types for which the compiler generates debug information. The intent is to reduce duplicate struct debug information between different object files within the same program. This option is a detailed version of -femit-struct-debug-reduced and -femit-struct-debug-baseonly, which serves for most needs.
A specification has the syntax[dir:|ind:][ord:|gen:](any|sys|base|none) The optional first word limits the specification to structs that are used directly (dir:) or used indirectly (ind:). A struct type is used directly when it is the type of a variable, member. Indirect uses arise through pointers to structs. That is, when use of an incomplete struct is valid, the use is indirect. An example is struct one direct; struct two * indirect;. The optional second word limits the specification to ordinary structs (ord:) or generic structs (gen:). Generic structs are a bit complicated to explain. For C++, these are non- explicit specializations of template classes, or non-template classes within the above. Other programming languages have generics, but -femit-struct-debug-detailed does not yet implement them. The third word specifies the source files for those structs for which the compiler should emit debug information. The values none and any have the normal meaning. The value base means that the base of name of the file in which the type declaration appears must match the base of the name of the main compilation file. In practice, this means that when compiling foo.c, debug information is generated for types declared in that file and foo.h, but not other header files. The value sys means those types satisfying base or declared in system or compiler headers. You may need to experiment to determine the best settings for your application. The default is -femit-struct-debug-detailed=all. This option works only with DWARF debug output. -fno-dwarf2-cfi-asm Emit DWARF unwind info as compiler generated ".eh_frame" section instead of using GAS ".cfi_*" directives. -fno-eliminate-unused-debug-types Normally, when producing DWARF output, GCC avoids producing debug symbol output for types that are nowhere used in the source file being compiled. Sometimes it is useful to have GCC emit debugging information for all types declared in a compilation unit, regardless of whether or not they are actually used in that compilation unit, for example if, in the debugger, you want to cast a value to a type that is not actually used in your program (but is declared). More often, however, this results in a significant amount of wasted space. Options That Control Optimization These options control various sorts of optimizations. Without any optimization option, the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you expect from the source code. Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program. The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them. Not all optimizations are controlled directly by a flag. Only optimizations that have a flag are listed in this section. Most optimizations are completely disabled at -O0 or if an -O level is not set on the command line, even if individual optimization flags are specified. Similarly, -Og suppresses many optimization passes. 
Depending on the target and how GCC was configured, a slightly different set of optimizations may be enabled at each -O level than those listed here. You can invoke GCC with -Q --help=optimizers to find out the exact set of optimizations that are enabled at each level. -O -O1 Optimize. Optimizing compilation takes somewhat more time, and a lot more memory for a large function. With -O, the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time. -O turns on the following optimization flags: -fauto-inc-dec -fbranch-count-reg -fcombine-stack-adjustments -fcompare-elim -fcprop-registers -fdce -fdefer-pop -fdelayed-branch -fdse -fforward-propagate -fguess-branch-probability -fif-conversion -fif-conversion2 -finline-functions-called-once -fipa-profile -fipa-pure-const -fipa-reference -fipa-reference-addressable -fmerge-constants -fmove-loop-invariants -fomit-frame-pointer -freorder-blocks -fshrink-wrap -fshrink-wrap-separate -fsplit-wide-types -fssa-backprop -fssa-phiopt -ftree-bit-ccp -ftree-ccp -ftree-ch -ftree-coalesce-vars -ftree-copy-prop -ftree-dce -ftree-dominator-opts -ftree-dse -ftree-forwprop -ftree-fre -ftree-phiprop -ftree-pta -ftree-scev-cprop -ftree-sink -ftree-slsr -ftree-sra -ftree-ter -funit-at-a-time -O2 Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. As compared to -O, this option increases both compilation time and the performance of the generated code. -O2 turns on all optimization flags specified by -O. It also turns on the following optimization flags: -falign-functions -falign-jumps -falign-labels -falign-loops -fcaller-saves -fcode-hoisting -fcrossjumping -fcse-follow-jumps -fcse-skip-blocks -fdelete-null-pointer-checks -fdevirtualize -fdevirtualize-speculatively -fexpensive-optimizations -fgcse -fgcse-lm -fhoist-adjacent-loads -finline-small-functions -findirect-inlining -fipa-bit-cp -fipa-cp -fipa-icf -fipa-ra -fipa-sra -fipa-vrp -fisolate-erroneous-paths-dereference -flra-remat -foptimize-sibling-calls -foptimize-strlen -fpartial-inlining -fpeephole2 -freorder-blocks-algorithm=stc -freorder-blocks-and-partition -freorder-functions -frerun-cse-after-loop -fschedule-insns -fschedule-insns2 -fsched-interblock -fsched-spec -fstore-merging -fstrict-aliasing -fthread-jumps -ftree-builtin-call-dce -ftree-pre -ftree-switch-conversion -ftree-tail-merge -ftree-vrp Please note the warning under -fgcse about invoking -O2 on programs that use computed gotos. -O3 Optimize yet more. -O3 turns on all optimizations specified by -O2 and also turns on the following optimization flags: -fgcse-after-reload -finline-functions -fipa-cp-clone -floop-interchange -floop-unroll-and-jam -fpeel-loops -fpredictive-commoning -fsplit-paths -ftree-loop-distribute-patterns -ftree-loop-distribution -ftree-loop-vectorize -ftree-partial-pre -ftree-slp-vectorize -funswitch-loops -fvect-cost-model -fversion-loops-for-strides -O0 Reduce compilation time and make debugging produce the expected results. This is the default. -Os Optimize for size. -Os enables all -O2 optimizations except those that often increase code size: -falign-functions -falign-jumps -falign-labels -falign-loops -fprefetch-loop-arrays -freorder-blocks-algorithm=stc It also enables -finline-functions, causes the compiler to tune for code size rather than execution speed, and performs further optimizations designed to reduce code size. -Ofast Disregard strict standards compliance. 
-Ofast enables all -O3 optimizations. It also enables optimizations that are not valid for all standard-compliant programs. It turns on -ffast-math and the Fortran-specific -fstack-arrays, unless -fmax-stack-var-size is specified, and -fno-protect-parens. -Og Optimize debugging experience. -Og should be the optimization level of choice for the standard edit-compile- debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience. It is a better choice than -O0 for producing debuggable code because some compiler passes that collect debug information are disabled at -O0. Like -O0, -Og completely disables a number of optimization passes so that individual options controlling them have no effect. Otherwise -Og enables all -O1 optimization flags except for those that may interfere with debugging: -fbranch-count-reg -fdelayed-branch -fif-conversion -fif-conversion2 -finline-functions-called-once -fmove-loop-invariants -fssa-phiopt -ftree-bit-ccp -ftree-pta -ftree-sra If you use multiple -O options, with or without level numbers, the last such option is the one that is effective. Options of the form -fflag specify machine-independent flags. Most flags have both positive and negative forms; the negative form of -ffoo is -fno-foo. In the table below, only one of the forms is listed---the one you typically use. You can figure out the other form by either removing no- or adding it. The following options control specific optimizations. They are either activated by -O options or are related to ones that are. You can use the following flags in the rare cases when "fine- tuning" of optimizations to be performed is desired. -fno-defer-pop For machines that must pop arguments after a function call, always pop the arguments as soon as each function returns. At levels -O1 and higher, -fdefer-pop is the default; this allows the compiler to let arguments accumulate on the stack for several function calls and pop them all at once. -fforward-propagate Perform a forward propagation pass on RTL. The pass tries to combine two instructions and checks if the result can be simplified. If loop unrolling is active, two passes are performed and the second is scheduled after loop unrolling. This option is enabled by default at optimization levels -O, -O2, -O3, -Os. -ffp-contract=style -ffp-contract=off disables floating-point expression contraction. -ffp-contract=fast enables floating-point expression contraction such as forming of fused multiply-add operations if the target has native support for them. -ffp-contract=on enables floating-point expression contraction if allowed by the language standard. This is currently not implemented and treated equal to -ffp-contract=off. The default is -ffp-contract=fast. -fomit-frame-pointer Omit the frame pointer in functions that don't need one. This avoids the instructions to save, set up and restore the frame pointer; on many targets it also makes an extra register available. On some targets this flag has no effect because the standard calling sequence always uses a frame pointer, so it cannot be omitted. Note that -fno-omit-frame-pointer doesn't guarantee the frame pointer is used in all functions. Several targets always omit the frame pointer in leaf functions. Enabled by default at -O and higher. -foptimize-sibling-calls Optimize sibling and tail recursive calls. Enabled at levels -O2, -O3, -Os. -foptimize-strlen Optimize various standard C string functions (e.g. 
"strlen", "strchr" or "strcpy") and their "_FORTIFY_SOURCE" counterparts into faster alternatives. Enabled at levels -O2, -O3. -fno-inline Do not expand any functions inline apart from those marked with the "always_inline" attribute. This is the default when not optimizing. Single functions can be exempted from inlining by marking them with the "noinline" attribute. -finline-small-functions Integrate functions into their callers when their body is smaller than expected function call code (so overall size of program gets smaller). The compiler heuristically decides which functions are simple enough to be worth integrating in this way. This inlining applies to all functions, even those not declared inline. Enabled at levels -O2, -O3, -Os. -findirect-inlining Inline also indirect calls that are discovered to be known at compile time thanks to previous inlining. This option has any effect only when inlining itself is turned on by the -finline-functions or -finline-small-functions options. Enabled at levels -O2, -O3, -Os. -finline-functions Consider all functions for inlining, even if they are not declared inline. The compiler heuristically decides which functions are worth integrating in this way. If all calls to a given function are integrated, and the function is declared "static", then the function is normally not output as assembler code in its own right. Enabled at levels -O3, -Os. Also enabled by -fprofile-use and -fauto-profile. -finline-functions-called-once Consider all "static" functions called once for inlining into their caller even if they are not marked "inline". If a call to a given function is integrated, then the function is not output as assembler code in its own right. Enabled at levels -O1, -O2, -O3 and -Os, but not -Og. -fearly-inlining Inline functions marked by "always_inline" and functions whose body seems smaller than the function call overhead early before doing -fprofile-generate instrumentation and real inlining pass. Doing so makes profiling significantly cheaper and usually inlining faster on programs having large chains of nested wrapper functions. Enabled by default. -fipa-sra Perform interprocedural scalar replacement of aggregates, removal of unused parameters and replacement of parameters passed by reference by parameters passed by value. Enabled at levels -O2, -O3 and -Os. -finline-limit=n By default, GCC limits the size of functions that can be inlined. This flag allows coarse control of this limit. n is the size of functions that can be inlined in number of pseudo instructions. Inlining is actually controlled by a number of parameters, which may be specified individually by using --param name=value. The -finline-limit=n option sets some of these parameters as follows: max-inline-insns-single is set to n/2. max-inline-insns-auto is set to n/2. See below for a documentation of the individual parameters controlling inlining and for the defaults of these parameters. Note: there may be no value to -finline-limit that results in default behavior. Note: pseudo instruction represents, in this particular context, an abstract measurement of function's size. In no way does it represent a count of assembly instructions and as such its exact meaning might change from one release to an another. -fno-keep-inline-dllexport This is a more fine-grained version of -fkeep-inline-functions, which applies only to functions that are declared using the "dllexport" attribute or declspec. 
-fkeep-inline-functions In C, emit "static" functions that are declared "inline" into the object file, even if the function has been inlined into all of its callers. This switch does not affect functions using the "extern inline" extension in GNU C90. In C++, emit any and all inline functions into the object file. -fkeep-static-functions Emit "static" functions into the object file, even if the function is never used. -fkeep-static-consts Emit variables declared "static const" when optimization isn't turned on, even if the variables aren't referenced. GCC enables this option by default. If you want to force the compiler to check if a variable is referenced, regardless of whether or not optimization is turned on, use the -fno-keep-static-consts option. -fmerge-constants Attempt to merge identical constants (string constants and floating-point constants) across compilation units. This option is the default for optimized compilation if the assembler and linker support it. Use -fno-merge-constants to inhibit this behavior. Enabled at levels -O, -O2, -O3, -Os. -fmerge-all-constants Attempt to merge identical constants and identical variables. This option implies -fmerge-constants. In addition to -fmerge-constants this considers e.g. even constant initialized arrays or initialized constant variables with integral or floating-point types. Languages like C or C++ require each variable, including multiple instances of the same variable in recursive calls, to have distinct locations, so using this option results in non-conforming behavior. -fmodulo-sched Perform swing modulo scheduling immediately before the first scheduling pass. This pass looks at innermost loops and reorders their instructions by overlapping different iterations. -fmodulo-sched-allow-regmoves Perform more aggressive SMS-based modulo scheduling with register moves allowed. By setting this flag certain anti-dependence edges are deleted, which triggers the generation of reg-moves based on the life-range analysis. This option is effective only with -fmodulo-sched enabled. -fno-branch-count-reg Disable the optimization pass that scans for opportunities to use "decrement and branch" instructions on a count register instead of instruction sequences that decrement a register, compare it against zero, and then branch based upon the result. This option is only meaningful on architectures that support such instructions, which include x86, PowerPC, IA-64 and S/390. Note that the -fno-branch-count-reg option doesn't remove the decrement and branch instructions from the generated instruction stream introduced by other optimization passes. The default is -fbranch-count-reg at -O1 and higher, except for -Og. -fno-function-cse Do not put function addresses in registers; make each instruction that calls a constant function contain the function's address explicitly. This option results in less efficient code, but some strange hacks that alter the assembler output may be confused by the optimizations performed when this option is not used. The default is -ffunction-cse. -fno-zero-initialized-in-bss If the target supports a BSS section, GCC by default puts variables that are initialized to zero into BSS. This can save space in the resulting code. This option turns off this behavior because some programs explicitly rely on variables going to the data section, e.g., so that the resulting executable can find the beginning of that section and/or make assumptions based on that. The default is -fzero-initialized-in-bss.
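A small sketch of the placement that -fno-zero-initialized-in-bss controls (variable names are invented; section names are the conventional ELF ones):

        /* Illustrative sketch.  With the default
           -fzero-initialized-in-bss, "table" is placed in the BSS
           section and typically occupies no space in the object file;
           with -fno-zero-initialized-in-bss it is emitted into the data
           section instead.  "scale" is not zero-initialized and goes to
           the data section either way.  */
        static int table[1024];         /* zero-initialized */
        static int scale = 3;           /* explicitly non-zero */

        int lookup (int i)
        {
          return table[i & 1023] * scale;
        }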
-fthread-jumps Perform optimizations that check to see if a jump branches to a location where another comparison subsumed by the first is found. If so, the first branch is redirected to either the destination of the second branch or a point immediately following it, depending on whether the condition is known to be true or false. Enabled at levels -O2, -O3, -Os. -fsplit-wide-types When using a type that occupies multiple registers, such as "long long" on a 32-bit system, split the registers apart and allocate them independently. This normally generates better code for those types, but may make debugging more difficult. Enabled at levels -O, -O2, -O3, -Os. -fcse-follow-jumps In common subexpression elimination (CSE), scan through jump instructions when the target of the jump is not reached by any other path. For example, when CSE encounters an "if" statement with an "else" clause, CSE follows the jump when the condition tested is false. Enabled at levels -O2, -O3, -Os. -fcse-skip-blocks This is similar to -fcse-follow-jumps, but causes CSE to follow jumps that conditionally skip over blocks. When CSE encounters a simple "if" statement with no else clause, -fcse-skip-blocks causes CSE to follow the jump around the body of the "if". Enabled at levels -O2, -O3, -Os. -frerun-cse-after-loop Re-run common subexpression elimination after loop optimizations are performed. Enabled at levels -O2, -O3, -Os. -fgcse Perform a global common subexpression elimination pass. This pass also performs global constant and copy propagation. Note: When compiling a program using computed gotos, a GCC extension, you may get better run-time performance if you disable the global common subexpression elimination pass by adding -fno-gcse to the command line. Enabled at levels -O2, -O3, -Os. -fgcse-lm When -fgcse-lm is enabled, global common subexpression elimination attempts to move loads that are only killed by stores into themselves. This allows a loop containing a load/store sequence to be changed to a load outside the loop, and a copy/store within the loop. Enabled by default when -fgcse is enabled. -fgcse-sm When -fgcse-sm is enabled, a store motion pass is run after global common subexpression elimination. This pass attempts to move stores out of loops. When used in conjunction with -fgcse-lm, loops containing a load/store sequence can be changed to a load before the loop and a store after the loop. Not enabled at any optimization level. -fgcse-las When -fgcse-las is enabled, the global common subexpression elimination pass eliminates redundant loads that come after stores to the same memory location (both partial and full redundancies). Not enabled at any optimization level. -fgcse-after-reload When -fgcse-after-reload is enabled, a redundant load elimination pass is performed after reload. The purpose of this pass is to clean up redundant spilling. Enabled by -fprofile-use and -fauto-profile. -faggressive-loop-optimizations This option tells the loop optimizer to use language constraints to derive bounds for the number of iterations of a loop. This assumes that loop code does not invoke undefined behavior by for example causing signed integer overflows or out-of-bound array accesses. The bounds for the number of iterations of a loop are used to guide loop unrolling and peeling and loop exit test optimizations. This option is enabled by default. -funconstrained-commons This option tells the compiler that variables declared in common blocks (e.g. Fortran) may later be overridden with longer trailing arrays. 
This prevents certain optimizations that depend on knowing the array bounds. -fcrossjumping Perform cross-jumping transformation. This transformation unifies equivalent code and saves code size. The resulting code may or may not perform better than without cross- jumping. Enabled at levels -O2, -O3, -Os. -fauto-inc-dec Combine increments or decrements of addresses with memory accesses. This pass is always skipped on architectures that do not have instructions to support this. Enabled by default at -O and higher on architectures that support this. -fdce Perform dead code elimination (DCE) on RTL. Enabled by default at -O and higher. -fdse Perform dead store elimination (DSE) on RTL. Enabled by default at -O and higher. -fif-conversion Attempt to transform conditional jumps into branch-less equivalents. This includes use of conditional moves, min, max, set flags and abs instructions, and some tricks doable by standard arithmetics. The use of conditional execution on chips where it is available is controlled by -fif-conversion2. Enabled at levels -O, -O2, -O3, -Os, but not with -Og. -fif-conversion2 Use conditional execution (where available) to transform conditional jumps into branch-less equivalents. Enabled at levels -O, -O2, -O3, -Os, but not with -Og. -fdeclone-ctor-dtor The C++ ABI requires multiple entry points for constructors and destructors: one for a base subobject, one for a complete object, and one for a virtual destructor that calls operator delete afterwards. For a hierarchy with virtual bases, the base and complete variants are clones, which means two copies of the function. With this option, the base and complete variants are changed to be thunks that call a common implementation. Enabled by -Os. -fdelete-null-pointer-checks Assume that programs cannot safely dereference null pointers, and that no code or data element resides at address zero. This option enables simple constant folding optimizations at all optimization levels. In addition, other optimization passes in GCC use this flag to control global dataflow analyses that eliminate useless checks for null pointers; these assume that a memory access to address zero always results in a trap, so that if a pointer is checked after it has already been dereferenced, it cannot be null. Note however that in some environments this assumption is not true. Use -fno-delete-null-pointer-checks to disable this optimization for programs that depend on that behavior. This option is enabled by default on most targets. On Nios II ELF, it defaults to off. On AVR, CR16, and MSP430, this option is completely disabled. Passes that use the dataflow information are enabled independently at different optimization levels. -fdevirtualize Attempt to convert calls to virtual functions to direct calls. This is done both within a procedure and interprocedurally as part of indirect inlining (-findirect-inlining) and interprocedural constant propagation (-fipa-cp). Enabled at levels -O2, -O3, -Os. -fdevirtualize-speculatively Attempt to convert calls to virtual functions to speculative direct calls. Based on the analysis of the type inheritance graph, determine for a given call the set of likely targets. If the set is small, preferably of size 1, change the call into a conditional deciding between direct and indirect calls. The speculative calls enable more optimizations, such as inlining. When they seem useless after further optimization, they are converted back into original form. 
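The kind of transformation enabled by -fdelete-null-pointer-checks (described above) can be illustrated with a small sketch; the function and variable names are made up for the example:

        /* Because *p is dereferenced before the test, GCC may assume p
           cannot be null at the comparison and remove the check and the
           branch below it when -fdelete-null-pointer-checks is in effect.
           Build with -fno-delete-null-pointer-checks to keep the check in
           environments where address zero is a valid location.  */
        int read_flag (int *p)
        {
          int value = *p;   /* dereference first */
          if (p == 0)       /* this test may be optimized away */
            return -1;
          return value;
        }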
-fdevirtualize-at-ltrans
    Stream extra information needed for aggressive devirtualization when running the link-time optimizer in local transformation mode. This option enables more devirtualization but significantly increases the size of streamed data. For this reason it is disabled by default.

-fexpensive-optimizations
    Perform a number of minor optimizations that are relatively expensive.

    Enabled at levels -O2, -O3, -Os.

-free
    Attempt to remove redundant extension instructions. This is especially helpful for the x86-64 architecture, which implicitly zero-extends in 64-bit registers after writing to their lower 32-bit half (see the illustrative sketch below).

    Enabled for Alpha, AArch64 and x86 at levels -O2, -O3, -Os.

-fno-lifetime-dse
    In C++ the value of an object is only affected by changes within its lifetime: when the constructor begins, the object has an indeterminate value, and any changes during the lifetime of the object are dead when the object is destroyed. Normally dead store elimination will take advantage of this; if your code relies on the value of the object storage persisting beyond the lifetime of the object, you can use this flag to disable this optimization. To preserve stores before the constructor starts (e.g. because your operator new clears the object storage) but still treat the object as dead after the destructor, you can use -flifetime-dse=1. The default behavior can be explicitly selected with -flifetime-dse=2. -flifetime-dse=0 is equivalent to -fno-lifetime-dse.

-flive-range-shrinkage
    Attempt to decrease register pressure through register live range shrinkage. This is helpful for fast processors with small or moderate size register sets.

-fira-algorithm=algorithm
    Use the specified coloring algorithm for the integrated register allocator. The algorithm argument can be priority, which specifies Chow's priority coloring, or CB, which specifies Chaitin-Briggs coloring. Chaitin-Briggs coloring is not implemented for all architectures, but for those targets that do support it, it is the default because it generates better code.

-fira-region=region
    Use specified regions for the integrated register allocator. The region argument should be one of the following:

    all
        Use all loops as register allocation regions. This can give the best results for machines with a small and/or irregular register set.

    mixed
        Use all loops except for loops with small register pressure as the regions. This value usually gives the best results in most cases and for most architectures, and is enabled by default when compiling with optimization for speed (-O, -O2, ...).

    one
        Use all functions as a single region. This typically results in the smallest code size, and is enabled by default for -Os or -O0.

-fira-hoist-pressure
    Use IRA to evaluate register pressure in the code hoisting pass for decisions to hoist expressions. This option usually results in smaller code, but it can slow the compiler down.

    This option is enabled at level -Os for all targets.

-fira-loop-pressure
    Use IRA to evaluate register pressure in loops for decisions to move loop invariants. This option usually results in generation of faster and smaller code on machines with large register files (>= 32 registers), but it can slow the compiler down.

    This option is enabled at level -O3 for some targets.

-fno-ira-share-save-slots
    Disable sharing of stack slots used for saving call-used hard registers living through a call. Each hard register gets a separate stack slot, and as a result function stack frames are larger.
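Returning to -free above, the following sketch gives a feel for the redundant extensions it targets on x86-64 (the function is purely illustrative): a 32-bit load into the lower half of a 64-bit register already zero-extends the upper half, so a separate zero-extension instruction for the widening cast is often unnecessary.

        /* On x86-64, writing the lower 32 bits of a register implicitly
           clears the upper 32 bits, so the extension implied by the cast
           below is frequently redundant; -free attempts to remove such
           extension instructions.  */
        unsigned long widen (const unsigned int *p)
        {
          unsigned int low = *p;          /* 32-bit load */
          return (unsigned long) low;     /* zero-extension often removable */
        }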
-fno-ira-share-spill-slots Disable sharing of stack slots allocated for pseudo- registers. Each pseudo-register that does not get a hard register gets a separate stack slot, and as a result function stack frames are larger. -flra-remat Enable CFG-sensitive rematerialization in LRA. Instead of loading values of spilled pseudos, LRA tries to rematerialize (recalculate) values if it is profitable. Enabled at levels -O2, -O3, -Os. -fdelayed-branch If supported for the target machine, attempt to reorder instructions to exploit instruction slots available after delayed branch instructions. Enabled at levels -O, -O2, -O3, -Os, but not at -Og. -fschedule-insns If supported for the target machine, attempt to reorder instructions to eliminate execution stalls due to required data being unavailable. This helps machines that have slow floating point or memory load instructions by allowing other instructions to be issued until the result of the load or floating-point instruction is required. Enabled at levels -O2, -O3. -fschedule-insns2 Similar to -fschedule-insns, but requests an additional pass of instruction scheduling after register allocation has been done. This is especially useful on machines with a relatively small number of registers and where memory load instructions take more than one cycle. Enabled at levels -O2, -O3, -Os. -fno-sched-interblock Disable instruction scheduling across basic blocks, which is normally enabled when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher. -fno-sched-spec Disable speculative motion of non-load instructions, which is normally enabled when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher. -fsched-pressure Enable register pressure sensitive insn scheduling before register allocation. This only makes sense when scheduling before register allocation is enabled, i.e. with -fschedule-insns or at -O2 or higher. Usage of this option can improve the generated code and decrease its size by preventing register pressure increase above the number of available hard registers and subsequent spills in register allocation. -fsched-spec-load Allow speculative motion of some load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher. -fsched-spec-load-dangerous Allow speculative motion of more load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher. -fsched-stalled-insns -fsched-stalled-insns=n Define how many insns (if any) can be moved prematurely from the queue of stalled insns into the ready list during the second scheduling pass. -fno-sched-stalled-insns means that no insns are moved prematurely, -fsched-stalled-insns=0 means there is no limit on how many queued insns can be moved prematurely. -fsched-stalled-insns without a value is equivalent to -fsched-stalled-insns=1. -fsched-stalled-insns-dep -fsched-stalled-insns-dep=n Define how many insn groups (cycles) are examined for a dependency on a stalled insn that is a candidate for premature removal from the queue of stalled insns. This has an effect only during the second scheduling pass, and only if -fsched-stalled-insns is used. -fno-sched-stalled-insns-dep is equivalent to -fsched-stalled-insns-dep=0. -fsched-stalled-insns-dep without a value is equivalent to -fsched-stalled-insns-dep=1. -fsched2-use-superblocks When scheduling after register allocation, use superblock scheduling. 
This allows motion across basic block boundaries, resulting in faster schedules. This option is experimental, as not all machine descriptions used by GCC model the CPU closely enough to avoid unreliable results from the algorithm. This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher. -fsched-group-heuristic Enable the group heuristic in the scheduler. This heuristic favors the instruction that belongs to a schedule group. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-critical-path-heuristic Enable the critical-path heuristic in the scheduler. This heuristic favors instructions on the critical path. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-spec-insn-heuristic Enable the speculative instruction heuristic in the scheduler. This heuristic favors speculative instructions with greater dependency weakness. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-rank-heuristic Enable the rank heuristic in the scheduler. This heuristic favors the instruction belonging to a basic block with greater size or frequency. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-last-insn-heuristic Enable the last-instruction heuristic in the scheduler. This heuristic favors the instruction that is less dependent on the last instruction scheduled. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-dep-count-heuristic Enable the dependent-count heuristic in the scheduler. This heuristic favors the instruction that has more instructions depending on it. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -freschedule-modulo-scheduled-loops Modulo scheduling is performed before traditional scheduling. If a loop is modulo scheduled, later scheduling passes may change its schedule. Use this option to control that behavior. -fselective-scheduling Schedule instructions using selective scheduling algorithm. Selective scheduling runs instead of the first scheduler pass. -fselective-scheduling2 Schedule instructions using selective scheduling algorithm. Selective scheduling runs instead of the second scheduler pass. -fsel-sched-pipelining Enable software pipelining of innermost loops during selective scheduling. This option has no effect unless one of -fselective-scheduling or -fselective-scheduling2 is turned on. -fsel-sched-pipelining-outer-loops When pipelining loops during selective scheduling, also pipeline outer loops. This option has no effect unless -fsel-sched-pipelining is turned on. -fsemantic-interposition Some object formats, like ELF, allow interposing of symbols by the dynamic linker. This means that for symbols exported from the DSO, the compiler cannot perform interprocedural propagation, inlining and other optimizations in anticipation that the function or variable in question may change. While this feature is useful, for example, to rewrite memory allocation functions by a debugging implementation, it is expensive in the terms of code quality. 
With -fno-semantic-interposition the compiler assumes that if interposition happens for functions the overwriting function will have precisely the same semantics (and side effects). Similarly if interposition happens for variables, the constructor of the variable will be the same. The flag has no effect for functions explicitly declared inline (where it is never allowed for interposition to change semantics) and for symbols explicitly declared weak. -fshrink-wrap Emit function prologues only before parts of the function that need it, rather than at the top of the function. This flag is enabled by default at -O and higher. -fshrink-wrap-separate Shrink-wrap separate parts of the prologue and epilogue separately, so that those parts are only executed when needed. This option is on by default, but has no effect unless -fshrink-wrap is also turned on and the target supports this. -fcaller-saves Enable allocation of values to registers that are clobbered by function calls, by emitting extra instructions to save and restore the registers around such calls. Such allocation is done only when it seems to result in better code. This option is always enabled by default on certain machines, usually those which have no call-preserved registers to use instead. Enabled at levels -O2, -O3, -Os. -fcombine-stack-adjustments Tracks stack adjustments (pushes and pops) and stack memory references and then tries to find ways to combine them. Enabled by default at -O1 and higher. -fipa-ra Use caller save registers for allocation if those registers are not used by any called function. In that case it is not necessary to save and restore them around calls. This is only possible if called functions are part of same compilation unit as current function and they are compiled before it. Enabled at levels -O2, -O3, -Os, however the option is disabled if generated code will be instrumented for profiling (-p, or -pg) or if callee's register usage cannot be known exactly (this happens on targets that do not expose prologues and epilogues in RTL). -fconserve-stack Attempt to minimize stack usage. The compiler attempts to use less stack space, even if that makes the program slower. This option implies setting the large-stack-frame parameter to 100 and the large-stack-frame-growth parameter to 400. -ftree-reassoc Perform reassociation on trees. This flag is enabled by default at -O and higher. -fcode-hoisting Perform code hoisting. Code hoisting tries to move the evaluation of expressions executed on all paths to the function exit as early as possible. This is especially useful as a code size optimization, but it often helps for code speed as well. This flag is enabled by default at -O2 and higher. -ftree-pre Perform partial redundancy elimination (PRE) on trees. This flag is enabled by default at -O2 and -O3. -ftree-partial-pre Make partial redundancy elimination (PRE) more aggressive. This flag is enabled by default at -O3. -ftree-forwprop Perform forward propagation on trees. This flag is enabled by default at -O and higher. -ftree-fre Perform full redundancy elimination (FRE) on trees. The difference between FRE and PRE is that FRE only considers expressions that are computed on all paths leading to the redundant computation. This analysis is faster than PRE, though it exposes fewer redundancies. This flag is enabled by default at -O and higher. -ftree-phiprop Perform hoisting of loads from conditional pointers on trees. This pass is enabled by default at -O and higher. 
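As a rough illustration of what -ftree-phiprop does, consider this sketch (the names are illustrative): when a pointer is chosen conditionally and then dereferenced, the pass may hoist the loads into the branches so that values, rather than pointers, are selected.

        /* With -ftree-phiprop the load through the conditionally chosen
           pointer may be rewritten so that each branch loads its own
           value, roughly as if the body were "return flag ? *a : *b;".  */
        int pick (const int *a, const int *b, int flag)
        {
          const int *p = flag ? a : b;    /* conditional pointer */
          return *p;                      /* load may be hoisted into the branches */
        }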
-fhoist-adjacent-loads Speculatively hoist loads from both branches of an if-then- else if the loads are from adjacent locations in the same structure and the target architecture has a conditional move instruction. This flag is enabled by default at -O2 and higher. -ftree-copy-prop Perform copy propagation on trees. This pass eliminates unnecessary copy operations. This flag is enabled by default at -O and higher. -fipa-pure-const Discover which functions are pure or constant. Enabled by default at -O and higher. -fipa-reference Discover which static variables do not escape the compilation unit. Enabled by default at -O and higher. -fipa-reference-addressable Discover read-only, write-only and non-addressable static variables. Enabled by default at -O and higher. -fipa-stack-alignment Reduce stack alignment on call sites if possible. Enabled by default. -fipa-pta Perform interprocedural pointer analysis and interprocedural modification and reference analysis. This option can cause excessive memory and compile-time usage on large compilation units. It is not enabled by default at any optimization level. -fipa-profile Perform interprocedural profile propagation. The functions called only from cold functions are marked as cold. Also functions executed once (such as "cold", "noreturn", static constructors or destructors) are identified. Cold functions and loop less parts of functions executed once are then optimized for size. Enabled by default at -O and higher. -fipa-cp Perform interprocedural constant propagation. This optimization analyzes the program to determine when values passed to functions are constants and then optimizes accordingly. This optimization can substantially increase performance if the application has constants passed to functions. This flag is enabled by default at -O2, -Os and -O3. It is also enabled by -fprofile-use and -fauto-profile. -fipa-cp-clone Perform function cloning to make interprocedural constant propagation stronger. When enabled, interprocedural constant propagation performs function cloning when externally visible function can be called with constant arguments. Because this optimization can create multiple copies of functions, it may significantly increase code size (see --param ipcp-unit-growth=value). This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile. -fipa-bit-cp When enabled, perform interprocedural bitwise constant propagation. This flag is enabled by default at -O2 and by -fprofile-use and -fauto-profile. It requires that -fipa-cp is enabled. -fipa-vrp When enabled, perform interprocedural propagation of value ranges. This flag is enabled by default at -O2. It requires that -fipa-cp is enabled. -fipa-icf Perform Identical Code Folding for functions and read-only variables. The optimization reduces code size and may disturb unwind stacks by replacing a function by equivalent one with a different name. The optimization works more effectively with link-time optimization enabled. Although the behavior is similar to the Gold Linker's ICF optimization, GCC ICF works on different levels and thus the optimizations are not same - there are equivalences that are found only by GCC and equivalences found only by Gold. This flag is enabled by default at -O2 and -Os. -flive-patching=level Control GCC's optimizations to produce output suitable for live-patching. 
If the compiler's optimization uses a function's body or information extracted from its body to optimize/change another function, the latter is called an impacted function of the former. If a function is patched, its impacted functions should be patched too. The impacted functions are determined by the compiler's interprocedural optimizations. For example, a caller is impacted when inlining a function into its caller, cloning a function and changing its caller to call this new clone, or extracting a function's pureness/constness information to optimize its direct or indirect callers, etc. Usually, the more IPA optimizations enabled, the larger the number of impacted functions for each function. In order to control the number of impacted functions and more easily compute the list of impacted function, IPA optimizations can be partially enabled at two different levels. The level argument should be one of the following: inline-clone Only enable inlining and cloning optimizations, which includes inlining, cloning, interprocedural scalar replacement of aggregates and partial inlining. As a result, when patching a function, all its callers and its clones' callers are impacted, therefore need to be patched as well. -flive-patching=inline-clone disables the following optimization flags: -fwhole-program -fipa-pta -fipa-reference -fipa-ra -fipa-icf -fipa-icf-functions -fipa-icf-variables -fipa-bit-cp -fipa-vrp -fipa-pure-const -fipa-reference-addressable -fipa-stack-alignment inline-only-static Only enable inlining of static functions. As a result, when patching a static function, all its callers are impacted and so need to be patched as well. In addition to all the flags that -flive-patching=inline-clone disables, -flive-patching=inline-only-static disables the following additional optimization flags: -fipa-cp-clone -fipa-sra -fpartial-inlining -fipa-cp When -flive-patching is specified without any value, the default value is inline-clone. This flag is disabled by default. Note that -flive-patching is not supported with link-time optimization (-flto). -fisolate-erroneous-paths-dereference Detect paths that trigger erroneous or undefined behavior due to dereferencing a null pointer. Isolate those paths from the main control flow and turn the statement with erroneous or undefined behavior into a trap. This flag is enabled by default at -O2 and higher and depends on -fdelete-null-pointer-checks also being enabled. -fisolate-erroneous-paths-attribute Detect paths that trigger erroneous or undefined behavior due to a null value being used in a way forbidden by a "returns_nonnull" or "nonnull" attribute. Isolate those paths from the main control flow and turn the statement with erroneous or undefined behavior into a trap. This is not currently enabled, but may be enabled by -O2 in the future. -ftree-sink Perform forward store motion on trees. This flag is enabled by default at -O and higher. -ftree-bit-ccp Perform sparse conditional bit constant propagation on trees and propagate pointer alignment information. This pass only operates on local scalar variables and is enabled by default at -O1 and higher, except for -Og. It requires that -ftree-ccp is enabled. -ftree-ccp Perform sparse conditional constant propagation (CCP) on trees. This pass only operates on local scalar variables and is enabled by default at -O and higher. -fssa-backprop Propagate information about uses of a value up the definition chain in order to simplify the definitions. 
For example, this pass strips sign operations if the sign of a value never matters. The flag is enabled by default at -O and higher. -fssa-phiopt Perform pattern matching on SSA PHI nodes to optimize conditional code. This pass is enabled by default at -O1 and higher, except for -Og. -ftree-switch-conversion Perform conversion of simple initializations in a switch to initializations from a scalar array. This flag is enabled by default at -O2 and higher. -ftree-tail-merge Look for identical code sequences. When found, replace one with a jump to the other. This optimization is known as tail merging or cross jumping. This flag is enabled by default at -O2 and higher. The compilation time in this pass can be limited using max-tail-merge-comparisons parameter and max- tail-merge-iterations parameter. -ftree-dce Perform dead code elimination (DCE) on trees. This flag is enabled by default at -O and higher. -ftree-builtin-call-dce Perform conditional dead code elimination (DCE) for calls to built-in functions that may set "errno" but are otherwise free of side effects. This flag is enabled by default at -O2 and higher if -Os is not also specified. -ftree-dominator-opts Perform a variety of simple scalar cleanups (constant/copy propagation, redundancy elimination, range propagation and expression simplification) based on a dominator tree traversal. This also performs jump threading (to reduce jumps to jumps). This flag is enabled by default at -O and higher. -ftree-dse Perform dead store elimination (DSE) on trees. A dead store is a store into a memory location that is later overwritten by another store without any intervening loads. In this case the earlier store can be deleted. This flag is enabled by default at -O and higher. -ftree-ch Perform loop header copying on trees. This is beneficial since it increases effectiveness of code motion optimizations. It also saves one jump. This flag is enabled by default at -O and higher. It is not enabled for -Os, since it usually increases code size. -ftree-loop-optimize Perform loop optimizations on trees. This flag is enabled by default at -O and higher. -ftree-loop-linear -floop-strip-mine -floop-block Perform loop nest optimizations. Same as -floop-nest-optimize. To use this code transformation, GCC has to be configured with --with-isl to enable the Graphite loop transformation infrastructure. -fgraphite-identity Enable the identity transformation for graphite. For every SCoP we generate the polyhedral representation and transform it back to gimple. Using -fgraphite-identity we can check the costs or benefits of the GIMPLE -> GRAPHITE -> GIMPLE transformation. Some minimal optimizations are also performed by the code generator isl, like index splitting and dead code elimination in loops. -floop-nest-optimize Enable the isl based loop nest optimizer. This is a generic loop nest optimizer based on the Pluto optimization algorithms. It calculates a loop structure optimized for data-locality and parallelism. This option is experimental. -floop-parallelize-all Use the Graphite data dependence analysis to identify loops that can be parallelized. Parallelize all the loops that can be analyzed to not contain loop carried dependences without checking that it is profitable to parallelize the loops. -ftree-coalesce-vars While transforming the program out of the SSA representation, attempt to reduce copying by coalescing versions of different user-defined variables, instead of just compiler temporaries. 
This may severely limit the ability to debug an optimized program compiled with -fno-var-tracking-assignments. In the negated form, this flag prevents SSA coalescing of user variables. This option is enabled by default if optimization is enabled, and it does very little otherwise.

-ftree-loop-if-convert
    Attempt to transform conditional jumps in the innermost loops to branch-less equivalents. The intent is to remove control flow from the innermost loops in order to improve the ability of the vectorization pass to handle these loops. This is enabled by default if vectorization is enabled.

-ftree-loop-distribution
    Perform loop distribution. This flag can improve cache performance on big loop bodies and allow further loop optimizations, like parallelization or vectorization, to take place. For example, the loop

        DO I = 1, N
          A(I) = B(I) + C
          D(I) = E(I) * F
        ENDDO

    is transformed to

        DO I = 1, N
          A(I) = B(I) + C
        ENDDO
        DO I = 1, N
          D(I) = E(I) * F
        ENDDO

    This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-ftree-loop-distribute-patterns
    Perform loop distribution of patterns that can be code generated with calls to a library. This flag is enabled by default at -O3, and by -fprofile-use and -fauto-profile.

    This pass distributes the initialization loops and generates a call to memset zero. For example, the loop

        DO I = 1, N
          A(I) = 0
          B(I) = A(I) + I
        ENDDO

    is transformed to

        DO I = 1, N
          A(I) = 0
        ENDDO
        DO I = 1, N
          B(I) = A(I) + I
        ENDDO

    and the initialization loop is transformed into a call to memset zero. This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-floop-interchange
    Perform loop interchange outside of graphite. This flag can improve cache performance on loop nest and allow further loop optimizations, like vectorization, to take place. For example, the loop

        for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
              c[i][j] = c[i][j] + a[i][k]*b[k][j];

    is transformed to

        for (int i = 0; i < N; i++)
          for (int k = 0; k < N; k++)
            for (int j = 0; j < N; j++)
              c[i][j] = c[i][j] + a[i][k]*b[k][j];

    This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-floop-unroll-and-jam
    Apply unroll and jam transformations on feasible loops. In a loop nest this unrolls the outer loop by some factor and fuses the resulting multiple inner loops. This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-ftree-loop-im
    Perform loop invariant motion on trees. This pass moves only invariants that are hard to handle at RTL level (function calls, operations that expand to nontrivial sequences of insns). With -funswitch-loops it also moves operands of conditions that are invariant out of the loop, so that we can use just trivial invariantness analysis in loop unswitching. The pass also includes store motion.

-ftree-loop-ivcanon
    Create a canonical counter for number of iterations in loops for which determining number of iterations requires complicated analysis. Later optimizations then may determine the number easily. Useful especially in connection with unrolling.

-ftree-scev-cprop
    Perform final value replacement. If a variable is modified in a loop in such a way that its value when exiting the loop can be determined using only its initial value and the number of loop iterations, replace uses of the final value by such a computation, provided it is sufficiently cheap. This reduces data dependencies and may allow further simplifications.
Enabled by default at -O and higher. -fivopts Perform induction variable optimizations (strength reduction, induction variable merging and induction variable elimination) on trees. -ftree-parallelize-loops=n Parallelize loops, i.e., split their iteration space to run in n threads. This is only possible for loops whose iterations are independent and can be arbitrarily reordered. The optimization is only profitable on multiprocessor machines, for loops that are CPU-intensive, rather than constrained e.g. by memory bandwidth. This option implies -pthread, and thus is only supported on targets that have support for -pthread. -ftree-pta Perform function-local points-to analysis on trees. This flag is enabled by default at -O1 and higher, except for -Og. -ftree-sra Perform scalar replacement of aggregates. This pass replaces structure references with scalars to prevent committing structures to memory too early. This flag is enabled by default at -O1 and higher, except for -Og. -fstore-merging Perform merging of narrow stores to consecutive memory addresses. This pass merges contiguous stores of immediate values narrower than a word into fewer wider stores to reduce the number of instructions. This is enabled by default at -O2 and higher as well as -Os. -ftree-ter Perform temporary expression replacement during the SSA->normal phase. Single use/single def temporaries are replaced at their use location with their defining expression. This results in non-GIMPLE code, but gives the expanders much more complex trees to work on resulting in better RTL generation. This is enabled by default at -O and higher. -ftree-slsr Perform straight-line strength reduction on trees. This recognizes related expressions involving multiplications and replaces them by less expensive calculations when possible. This is enabled by default at -O and higher. -ftree-vectorize Perform vectorization on trees. This flag enables -ftree-loop-vectorize and -ftree-slp-vectorize if not explicitly specified. -ftree-loop-vectorize Perform loop vectorization on trees. This flag is enabled by default at -O3 and by -ftree-vectorize, -fprofile-use, and -fauto-profile. -ftree-slp-vectorize Perform basic block vectorization on trees. This flag is enabled by default at -O3 and by -ftree-vectorize, -fprofile-use, and -fauto-profile. -fvect-cost-model=model Alter the cost model used for vectorization. The model argument should be one of unlimited, dynamic or cheap. With the unlimited model the vectorized code-path is assumed to be profitable while with the dynamic model a runtime check guards the vectorized code-path to enable it only for iteration counts that will likely execute faster than when executing the original scalar loop. The cheap model disables vectorization of loops where doing so would be cost prohibitive for example due to required runtime checks for data dependence or alignment but otherwise is equal to the dynamic model. The default cost model depends on other optimization flags and is either dynamic or cheap. -fsimd-cost-model=model Alter the cost model used for vectorization of loops marked with the OpenMP simd directive. The model argument should be one of unlimited, dynamic, cheap. All values of model have the same meaning as described in -fvect-cost-model and by default a cost model defined with -fvect-cost-model is used. -ftree-vrp Perform Value Range Propagation on trees. This is similar to the constant propagation pass, but instead of values, ranges of values are propagated. 
This allows the optimizers to remove unnecessary range checks like array bound checks and null pointer checks. This is enabled by default at -O2 and higher. Null pointer check elimination is only done if -fdelete-null-pointer-checks is enabled.

-fsplit-paths
    Split paths leading to loop backedges. This can improve dead code elimination and common subexpression elimination. This is enabled by default at -O3 and above.

-fsplit-ivs-in-unroller
    Enables expression of values of induction variables in later iterations of the unrolled loop using the value in the first iteration. This breaks long dependency chains, thus improving efficiency of the scheduling passes.

    A combination of -fweb and CSE is often sufficient to obtain the same effect. However, that is not reliable in cases where the loop body is more complicated than a single basic block. It also does not work at all on some architectures due to restrictions in the CSE pass.

    This optimization is enabled by default.

-fvariable-expansion-in-unroller
    With this option, the compiler creates multiple copies of some local variables when unrolling a loop, which can result in superior code.

-fpartial-inlining
    Inline parts of functions. This option has any effect only when inlining itself is turned on by the -finline-functions or -finline-small-functions options.

    Enabled at levels -O2, -O3, -Os.

-fpredictive-commoning
    Perform predictive commoning optimization, i.e., reusing computations (especially memory loads and stores) performed in previous iterations of loops.

    This option is enabled at level -O3. It is also enabled by -fprofile-use and -fauto-profile.

-fprefetch-loop-arrays
    If supported by the target machine, generate instructions to prefetch memory to improve the performance of loops that access large arrays.

    This option may generate better or worse code; results are highly dependent on the structure of loops within the source code.

    Disabled at level -Os.

-fno-printf-return-value
    Do not substitute constants for known return value of formatted output functions such as "sprintf", "snprintf", "vsprintf", and "vsnprintf" (but not "printf" or "fprintf"). This transformation allows GCC to optimize or even eliminate branches based on the known return value of these functions called with arguments that are either constant, or whose values are known to be in a range that makes determining the exact return value possible. For example, when -fprintf-return-value is in effect, both the branch and the body of the "if" statement (but not the call to "snprintf") can be optimized away when "i" is a 32-bit or smaller integer because the return value is guaranteed to be at most 8.

        char buf[9];
        if (snprintf (buf, sizeof buf, "%08x", i) >= sizeof buf)
          ...

    The -fprintf-return-value option relies on other optimizations and yields best results with -O2 and above. It works in tandem with the -Wformat-overflow and -Wformat-truncation options. The -fprintf-return-value option is enabled by default.

-fno-peephole
-fno-peephole2
    Disable any machine-specific peephole optimizations. The difference between -fno-peephole and -fno-peephole2 is in how they are implemented in the compiler; some targets use one, some use the other, a few use both.

    -fpeephole is enabled by default. -fpeephole2 is enabled at levels -O2, -O3, -Os.

-fno-guess-branch-probability
    Do not guess branch probabilities using heuristics. GCC uses heuristics to guess branch probabilities if they are not provided by profiling feedback (-fprofile-arcs). These heuristics are based on the control flow graph.
If some branch probabilities are specified by "__builtin_expect", then the heuristics are used to guess branch probabilities for the rest of the control flow graph, taking the "__builtin_expect" info into account. The interactions between the heuristics and "__builtin_expect" can be complex, and in some cases, it may be useful to disable the heuristics so that the effects of "__builtin_expect" are easier to understand.

It is also possible to specify the expected probability of the expression with the "__builtin_expect_with_probability" built-in function.

The default is -fguess-branch-probability at levels -O, -O2, -O3, -Os.

-freorder-blocks
    Reorder basic blocks in the compiled function in order to reduce number of taken branches and improve code locality.

    Enabled at levels -O, -O2, -O3, -Os.

-freorder-blocks-algorithm=algorithm
    Use the specified algorithm for basic block reordering. The algorithm argument can be simple, which does not increase code size (except sometimes due to secondary effects like alignment), or stc, the "software trace cache" algorithm, which tries to put all often executed code together, minimizing the number of branches executed by making extra copies of code.

    The default is simple at levels -O, -Os, and stc at levels -O2, -O3.

-freorder-blocks-and-partition
    In addition to reordering basic blocks in the compiled function, in order to reduce number of taken branches, partitions hot and cold basic blocks into separate sections of the assembly and .o files, to improve paging and cache locality performance.

    This optimization is automatically turned off in the presence of exception handling or unwind tables (on targets using setjump/longjump or target specific scheme), for linkonce sections, for functions with a user-defined section attribute and on any architecture that does not support named sections. When -fsplit-stack is used this option is not enabled by default (to avoid linker errors), but may be enabled explicitly (if using a working linker).

    Enabled for x86 at levels -O2, -O3, -Os.

-freorder-functions
    Reorder functions in the object file in order to improve code locality. This is implemented by using special subsections ".text.hot" for most frequently executed functions and ".text.unlikely" for unlikely executed functions. Reordering is done by the linker so object file format must support named sections and linker must place them in a reasonable way.

    This option isn't effective unless you either provide profile feedback (see -fprofile-arcs for details) or manually annotate functions with "hot" or "cold" attributes.

    Enabled at levels -O2, -O3, -Os.

-fstrict-aliasing
    Allow the compiler to assume the strictest aliasing rules applicable to the language being compiled. For C (and C++), this activates optimizations based on the type of expressions. In particular, an object of one type is assumed never to reside at the same address as an object of a different type, unless the types are almost the same. For example, an "unsigned int" can alias an "int", but not a "void*" or a "double". A character type may alias any other type.

    Pay special attention to code like this:

        union a_union {
          int i;
          double d;
        };

        int f() {
          union a_union t;
          t.d = 3.0;
          return t.i;
        }

    The practice of reading from a different union member than the one most recently written to (called "type-punning") is common. Even with -fstrict-aliasing, type-punning is allowed, provided the memory is accessed through the union type. So, the code above works as expected.
    However, this code might not:

        int f() {
          union a_union t;
          int* ip;
          t.d = 3.0;
          ip = &t.i;
          return *ip;
        }

    Similarly, access by taking the address, casting the resulting pointer and dereferencing the result has undefined behavior, even if the cast uses a union type, e.g.:

        int f() {
          double d = 3.0;
          return ((union a_union *) &d)->i;
        }

    The -fstrict-aliasing option is enabled at levels -O2, -O3, -Os.

-falign-functions
-falign-functions=n
-falign-functions=n:m
-falign-functions=n:m:n2
-falign-functions=n:m:n2:m2
    Align the start of functions to the next power-of-two greater than n, skipping up to m-1 bytes. This ensures that at least the first m bytes of the function can be fetched by the CPU without crossing an n-byte alignment boundary.

    If m is not specified, it defaults to n.

    Examples: -falign-functions=32 aligns functions to the next 32-byte boundary, -falign-functions=24 aligns to the next 32-byte boundary only if this can be done by skipping 23 bytes or less, -falign-functions=32:7 aligns to the next 32-byte boundary only if this can be done by skipping 6 bytes or less.

    The second pair of n2:m2 values allows you to specify a secondary alignment: -falign-functions=64:7:32:3 aligns to the next 64-byte boundary if this can be done by skipping 6 bytes or less, otherwise aligns to the next 32-byte boundary if this can be done by skipping 2 bytes or less. If m2 is not specified, it defaults to n2.

    Some assemblers only support this flag when n is a power of two; in that case, it is rounded up.

    -fno-align-functions and -falign-functions=1 are equivalent and mean that functions are not aligned.

    If n is not specified or is zero, use a machine-dependent default. The maximum allowed n option value is 65536.

    Enabled at levels -O2, -O3.

-flimit-function-alignment
    If this option is enabled, the compiler tries to avoid unnecessarily overaligning functions. It attempts to instruct the assembler to align by the amount specified by -falign-functions, but not to skip more bytes than the size of the function.

-falign-labels
-falign-labels=n
-falign-labels=n:m
-falign-labels=n:m:n2
-falign-labels=n:m:n2:m2
    Align all branch targets to a power-of-two boundary.

    Parameters of this option are analogous to the -falign-functions option. -fno-align-labels and -falign-labels=1 are equivalent and mean that labels are not aligned.

    If -falign-loops or -falign-jumps are applicable and are greater than this value, then their values are used instead.

    If n is not specified or is zero, use a machine-dependent default which is very likely to be 1, meaning no alignment. The maximum allowed n option value is 65536.

    Enabled at levels -O2, -O3.

-falign-loops
-falign-loops=n
-falign-loops=n:m
-falign-loops=n:m:n2
-falign-loops=n:m:n2:m2
    Align loops to a power-of-two boundary. If the loops are executed many times, this makes up for any execution of the dummy padding instructions.

    Parameters of this option are analogous to the -falign-functions option. -fno-align-loops and -falign-loops=1 are equivalent and mean that loops are not aligned. The maximum allowed n option value is 65536.

    If n is not specified or is zero, use a machine-dependent default.

    Enabled at levels -O2, -O3.

-falign-jumps
-falign-jumps=n
-falign-jumps=n:m
-falign-jumps=n:m:n2
-falign-jumps=n:m:n2:m2
    Align branch targets to a power-of-two boundary, for branch targets where the targets can only be reached by jumping. In this case, no dummy operations need be executed.

    Parameters of this option are analogous to the -falign-functions option.
    -fno-align-jumps and -falign-jumps=1 are equivalent and mean that branch targets are not aligned.

    If n is not specified or is zero, use a machine-dependent default. The maximum allowed n option value is 65536.

    Enabled at levels -O2, -O3.

-funit-at-a-time
    This option is left for compatibility reasons. -funit-at-a-time has no effect, while -fno-unit-at-a-time implies -fno-toplevel-reorder and -fno-section-anchors.

    Enabled by default.

-fno-toplevel-reorder
    Do not reorder top-level functions, variables, and "asm" statements. Output them in the same order that they appear in the input file. When this option is used, unreferenced static variables are not removed. This option is intended to support existing code that relies on a particular ordering. For new code, it is better to use attributes when possible.

    -ftoplevel-reorder is the default at -O1 and higher, and also at -O0 if -fsection-anchors is explicitly requested. Additionally -fno-toplevel-reorder implies -fno-section-anchors.

-fweb
    Constructs webs as commonly used for register allocation purposes and assigns each web an individual pseudo register. This allows the register allocation pass to operate on pseudos directly, but also strengthens several other optimization passes, such as CSE, loop optimizer and trivial dead code remover. It can, however, make debugging impossible, since variables no longer stay in a "home register".

    Enabled by default with -funroll-loops.

-fwhole-program
    Assume that the current compilation unit represents the whole program being compiled. All public functions and variables with the exception of "main" and those merged by attribute "externally_visible" become static functions and in effect are optimized more aggressively by interprocedural optimizers.

    This option should not be used in combination with -flto. Instead, relying on a linker plugin should provide safer and more precise information.

-flto[=n]
    This option runs the standard link-time optimizer. When invoked with source code, it generates GIMPLE (one of GCC's internal representations) and writes it to special ELF sections in the object file. When the object files are linked together, all the function bodies are read from these ELF sections and instantiated as if they had been part of the same translation unit.

    To use the link-time optimizer, -flto and optimization options should be specified at compile time and during the final link. It is recommended that you compile all the files participating in the same link with the same options and also specify those options at link time. For example:

        gcc -c -O2 -flto foo.c
        gcc -c -O2 -flto bar.c
        gcc -o myprog -flto -O2 foo.o bar.o

    The first two invocations to GCC save a bytecode representation of GIMPLE into special ELF sections inside foo.o and bar.o. The final invocation reads the GIMPLE bytecode from foo.o and bar.o, merges the two files into a single internal image, and compiles the result as usual. Since both foo.o and bar.o are merged into a single image, this causes all the interprocedural analyses and optimizations in GCC to work across the two files as if they were a single one. This means, for example, that the inliner is able to inline functions in bar.o into functions in foo.o and vice-versa.

    Another (simpler) way to enable link-time optimization is:

        gcc -o myprog -flto -O2 foo.c bar.c

    The above generates bytecode for foo.c and bar.c, merges them together into a single GIMPLE representation and optimizes them as usual to produce myprog.
The important thing to keep in mind is that to enable link- time optimizations you need to use the GCC driver to perform the link step. GCC automatically performs link-time optimization if any of the objects involved were compiled with the -flto command-line option. You can always override the automatic decision to do link-time optimization by passing -fno-lto to the link command. To make whole program optimization effective, it is necessary to make certain whole program assumptions. The compiler needs to know what functions and variables can be accessed by libraries and runtime outside of the link-time optimized unit. When supported by the linker, the linker plugin (see -fuse-linker-plugin) passes information to the compiler about used and externally visible symbols. When the linker plugin is not available, -fwhole-program should be used to allow the compiler to make these assumptions, which leads to more aggressive optimization decisions. When a file is compiled with -flto without -fuse-linker-plugin, the generated object file is larger than a regular object file because it contains GIMPLE bytecodes and the usual final code (see -ffat-lto-objects. This means that object files with LTO information can be linked as normal object files; if -fno-lto is passed to the linker, no interprocedural optimizations are applied. Note that when -fno-fat-lto-objects is enabled the compile stage is faster but you cannot perform a regular, non-LTO link on them. When producing the final binary, GCC only applies link-time optimizations to those files that contain bytecode. Therefore, you can mix and match object files and libraries with GIMPLE bytecodes and final object code. GCC automatically selects which files to optimize in LTO mode and which files to link without further processing. Generally, options specified at link time override those specified at compile time, although in some cases GCC attempts to infer link-time options from the settings used to compile the input files. If you do not specify an optimization level option -O at link time, then GCC uses the highest optimization level used when compiling the object files. Note that it is generally ineffective to specify an optimization level option only at link time and not at compile time, for two reasons. First, compiling without optimization suppresses compiler passes that gather information needed for effective optimization at link time. Second, some early optimization passes can be performed only at compile time and not at link time. There are some code generation flags preserved by GCC when generating bytecodes, as they need to be used during the final link. Currently, the following options and their settings are taken from the first object file that explicitly specifies them: -fPIC, -fpic, -fpie, -fcommon, -fexceptions, -fnon-call-exceptions, -fgnu-tm and all the -m target flags. Certain ABI-changing flags are required to match in all compilation units, and trying to override this at link time with a conflicting value is ignored. This includes options such as -freg-struct-return and -fpcc-struct-return. Other options such as -ffp-contract, -fno-strict-overflow, -fwrapv, -fno-trapv or -fno-strict-aliasing are passed through to the link stage and merged conservatively for conflicting translation units. Specifically -fno-strict-overflow, -fwrapv and -fno-trapv take precedence; and for example -ffp-contract=off takes precedence over -ffp-contract=fast. You can override them at link time. 
When you need to pass options to the assembler via -Wa or -Xassembler make sure to either compile such translation units with -fno-lto or consistently use the same assembler options on all translation units. You can alternatively also specify assembler options at LTO link time. If LTO encounters objects with C linkage declared with incompatible types in separate translation units to be linked together (undefined behavior according to ISO C99 6.2.7), a non-fatal diagnostic may be issued. The behavior is still undefined at run time. Similar diagnostics may be raised for other languages. Another feature of LTO is that it is possible to apply interprocedural optimizations on files written in different languages: gcc -c -flto foo.c g++ -c -flto bar.cc gfortran -c -flto baz.f90 g++ -o myprog -flto -O3 foo.o bar.o baz.o -lgfortran Notice that the final link is done with g++ to get the C++ runtime libraries and -lgfortran is added to get the Fortran runtime libraries. In general, when mixing languages in LTO mode, you should use the same link command options as when mixing languages in a regular (non-LTO) compilation. If object files containing GIMPLE bytecode are stored in a library archive, say libfoo.a, it is possible to extract and use them in an LTO link if you are using a linker with plugin support. To create static libraries suitable for LTO, use gcc-ar and gcc-ranlib instead of ar and ranlib; to show the symbols of object files with GIMPLE bytecode, use gcc-nm. Those commands require that ar, ranlib and nm have been compiled with plugin support. At link time, use the flag -fuse-linker-plugin to ensure that the library participates in the LTO optimization process: gcc -o myprog -O2 -flto -fuse-linker-plugin a.o b.o -lfoo With the linker plugin enabled, the linker extracts the needed GIMPLE files from libfoo.a and passes them on to the running GCC to make them part of the aggregated GIMPLE image to be optimized. If you are not using a linker with plugin support and/or do not enable the linker plugin, then the objects inside libfoo.a are extracted and linked as usual, but they do not participate in the LTO optimization process. In order to make a static library suitable for both LTO optimization and usual linkage, compile its object files with -flto -ffat-lto-objects. Link-time optimizations do not require the presence of the whole program to operate. If the program does not require any symbols to be exported, it is possible to combine -flto and -fwhole-program to allow the interprocedural optimizers to use more aggressive assumptions which may lead to improved optimization opportunities. Use of -fwhole-program is not needed when linker plugin is active (see -fuse-linker-plugin). The current implementation of LTO makes no attempt to generate bytecode that is portable between different types of hosts. The bytecode files are versioned and there is a strict version check, so bytecode files generated in one version of GCC do not work with an older or newer version of GCC. Link-time optimization does not work well with generation of debugging information on systems other than those using a combination of ELF and DWARF. If you specify the optional n, the optimization and code generation done at link time is executed in parallel using n parallel jobs by utilizing an installed make program. The environment variable MAKE may be used to override the program used. The default value for n is 1. 
You can also specify -flto=jobserver to use GNU make's job server mode to determine the number of parallel jobs. This is useful when the Makefile calling GCC is already executing in parallel. You must prepend a + to the command recipe in the parent Makefile for this to work. This option likely only works if MAKE is GNU make.

-flto-partition=alg
    Specify the partitioning algorithm used by the link-time optimizer. The value is either 1to1 to specify a partitioning mirroring the original source files or balanced to specify partitioning into equally sized chunks (whenever possible) or max to create a new partition for every symbol where possible. Specifying none as an algorithm disables partitioning and streaming completely. The default value is balanced. While 1to1 can be used as a workaround for various code ordering issues, the max partitioning is intended for internal testing only. The value one specifies that exactly one partition should be used while the value none bypasses partitioning and executes the link-time optimization step directly from the WPA phase.

-flto-odr-type-merging
    Enable streaming of mangled type names of C++ types and their unification at link time. This increases the size of LTO object files, but enables diagnostics about One Definition Rule violations.

-flto-compression-level=n
    This option specifies the level of compression used for intermediate language written to LTO object files, and is only meaningful in conjunction with LTO mode (-flto). Valid values are 0 (no compression) to 9 (maximum compression). Values outside this range are clamped to either 0 or 9. If the option is not given, a default balanced compression setting is used.

-fuse-linker-plugin
    Enables the use of a linker plugin during link-time optimization. This option relies on plugin support in the linker, which is available in gold or in GNU ld 2.21 or newer.

    This option enables the extraction of object files with GIMPLE bytecode out of library archives. This improves the quality of optimization by exposing more code to the link-time optimizer. This information specifies what symbols can be accessed externally (by non-LTO object or during dynamic linking). Resulting code quality improvements on binaries (and shared libraries that use hidden visibility) are similar to -fwhole-program. See -flto for a description of the effect of this flag and how to use it.

    This option is enabled by default when LTO support in GCC is enabled and GCC was configured for use with a linker supporting plugins (GNU ld 2.21 or newer or gold).

-ffat-lto-objects
    Fat LTO objects are object files that contain both the intermediate language and the object code. This makes them usable for both LTO linking and normal linking. This option is effective only when compiling with -flto and is ignored at link time.

    -fno-fat-lto-objects improves compilation time over plain LTO, but requires the complete toolchain to be aware of LTO. It requires a linker with linker plugin support for basic functionality. Additionally, nm, ar and ranlib need to support linker plugins to allow a full-featured build environment (capable of building static libraries etc). GCC provides the gcc-ar, gcc-nm, gcc-ranlib wrappers to pass the right options to these tools. With non-fat LTO, makefiles need to be modified to use them.

    Note that modern binutils provide a plugin auto-load mechanism. Installing the linker plugin into $libdir/bfd-plugins has the same effect as usage of the command wrappers (gcc-ar, gcc-nm and gcc-ranlib).
The default is -fno-fat-lto-objects on targets with linker plugin support. -fcompare-elim After register allocation and post-register allocation instruction splitting, identify arithmetic instructions that compute processor flags similar to a comparison operation based on that arithmetic. If possible, eliminate the explicit comparison operation. This pass only applies to certain targets that cannot explicitly represent the comparison operation before register allocation is complete. Enabled at levels -O, -O2, -O3, -Os. -fcprop-registers After register allocation and post-register allocation instruction splitting, perform a copy-propagation pass to try to reduce scheduling dependencies and occasionally eliminate the copy. Enabled at levels -O, -O2, -O3, -Os. -fprofile-correction Profiles collected using an instrumented binary for multi- threaded programs may be inconsistent due to missed counter updates. When this option is specified, GCC uses heuristics to correct or smooth out such inconsistencies. By default, GCC emits an error message when an inconsistent profile is detected. This option is enabled by -fauto-profile. -fprofile-use -fprofile-use=path Enable profile feedback-directed optimizations, and the following optimizations, many of which are generally profitable only with profile feedback available: -fbranch-probabilities -fprofile-values -funroll-loops -fpeel-loops -ftracer -fvpt -finline-functions -fipa-cp -fipa-cp-clone -fipa-bit-cp -fpredictive-commoning -fsplit-loops -funswitch-loops -fgcse-after-reload -ftree-loop-vectorize -ftree-slp-vectorize -fvect-cost-model=dynamic -ftree-loop-distribute-patterns -fprofile-reorder-functions Before you can use this option, you must first generate profiling information. By default, GCC emits an error message if the feedback profiles do not match the source code. This error can be turned into a warning by using -Wno-error=coverage-mismatch. Note this may result in poorly optimized code. Additionally, by default, GCC also emits a warning message if the feedback profiles do not exist (see -Wmissing-profile). If path is specified, GCC looks at the path to find the profile feedback data files. See -fprofile-dir. -fauto-profile -fauto-profile=path Enable sampling-based feedback-directed optimizations, and the following optimizations, many of which are generally profitable only with profile feedback available: -fbranch-probabilities -fprofile-values -funroll-loops -fpeel-loops -ftracer -fvpt -finline-functions -fipa-cp -fipa-cp-clone -fipa-bit-cp -fpredictive-commoning -fsplit-loops -funswitch-loops -fgcse-after-reload -ftree-loop-vectorize -ftree-slp-vectorize -fvect-cost-model=dynamic -ftree-loop-distribute-patterns -fprofile-correction path is the name of a file containing AutoFDO profile information. If omitted, it defaults to fbdata.afdo in the current directory. Producing an AutoFDO profile data file requires running your program with the perf utility on a supported GNU/Linux target system. For more information, see <https://perf.wiki.kernel.org/ >. E.g. perf record -e br_inst_retired:near_taken -b -o perf.data \ -- your_program Then use the create_gcov tool to convert the raw profile data to a format that can be used by GCC. You must also supply the unstripped binary for your program to this tool. See <https://github.com/google/autofdo >. E.g. create_gcov --binary=your_program.unstripped --profile=perf.data \ --gcov=profile.afdo The following options control compiler behavior regarding floating-point arithmetic. 
These options trade off between speed and correctness. All must be specifically enabled. -ffloat-store Do not store floating-point variables in registers, and inhibit other options that might change whether a floating- point value is taken from a register or memory. This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a "double" is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables. -fexcess-precision=style This option allows further control over excess precision on machines where floating-point operations occur in a format with more precision or range than the IEEE standard and interchange floating-point types. By default, -fexcess-precision=fast is in effect; this means that operations may be carried out in a wider precision than the types specified in the source if that would result in faster code, and it is unpredictable when rounding to the types specified in the source code takes place. When compiling C, if -fexcess-precision=standard is specified then excess precision follows the rules specified in ISO C99; in particular, both casts and assignments cause values to be rounded to their semantic types (whereas -ffloat-store only affects assignments). This option is enabled by default for C if a strict conformance option such as -std=c99 is used. -ffast-math enables -fexcess-precision=fast by default regardless of whether a strict conformance option is used. -fexcess-precision=standard is not implemented for languages other than C. On the x86, it has no effect if -mfpmath=sse or -mfpmath=sse+387 is specified; in the former case, IEEE semantics apply without excess precision, and in the latter, rounding is unpredictable. -ffast-math Sets the options -fno-math-errno, -funsafe-math-optimizations, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans, -fcx-limited-range and -fexcess-precision=fast. This option causes the preprocessor macro "__FAST_MATH__" to be defined. This option is not turned on by any -O option besides -Ofast since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. -fno-math-errno Do not set "errno" after calling math functions that are executed with a single instruction, e.g., "sqrt". A program that relies on IEEE exceptions for math error handling may want to use this flag for speed while maintaining IEEE arithmetic compatibility. This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. The default is -fmath-errno. On Darwin systems, the math library never sets "errno". There is therefore no reason for the compiler to consider the possibility that it might, and -fno-math-errno is the default. -funsafe-math-optimizations Allow optimizations for floating-point arithmetic that (a) assume that arguments and results are valid and (b) may violate IEEE or ANSI standards. 
When used at link time, it may include libraries or startup files that change the default FPU control word or other similar optimizations. This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. Enables -fno-signed-zeros, -fno-trapping-math, -fassociative-math and -freciprocal-math. The default is -fno-unsafe-math-optimizations. -fassociative-math Allow re-association of operands in series of floating-point operations. This violates the ISO C and C++ language standard by possibly changing computation result. NOTE: re- ordering may change the sign of zero as well as ignore NaNs and inhibit or create underflow or overflow (and thus cannot be used on code that relies on rounding behavior like "(x + 2**52) - 2**52". May also reorder floating-point comparisons and thus may not be used when ordered comparisons are required. This option requires that both -fno-signed-zeros and -fno-trapping-math be in effect. Moreover, it doesn't make much sense with -frounding-math. For Fortran the option is automatically enabled when both -fno-signed-zeros and -fno-trapping-math are in effect. The default is -fno-associative-math. -freciprocal-math Allow the reciprocal of a value to be used instead of dividing by the value if this enables optimizations. For example "x / y" can be replaced with "x * (1/y)", which is useful if "(1/y)" is subject to common subexpression elimination. Note that this loses precision and increases the number of flops operating on the value. The default is -fno-reciprocal-math. -ffinite-math-only Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs. This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. The default is -fno-finite-math-only. -fno-signed-zeros Allow optimizations for floating-point arithmetic that ignore the signedness of zero. IEEE arithmetic specifies the behavior of distinct +0.0 and -0.0 values, which then prohibits simplification of expressions such as x+0.0 or 0.0*x (even with -ffinite-math-only). This option implies that the sign of a zero result isn't significant. The default is -fsigned-zeros. -fno-trapping-math Compile code assuming that floating-point operations cannot generate user-visible traps. These traps include division by zero, overflow, underflow, inexact result and invalid operation. This option requires that -fno-signaling-nans be in effect. Setting this option may allow faster code if one relies on "non-stop" IEEE arithmetic, for example. This option should never be turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. The default is -ftrapping-math. -frounding-math Disable transformations and optimizations that assume default floating-point rounding behavior. This is round-to-zero for all floating point to integer conversions, and round-to- nearest for all other arithmetic truncations. 
This option should be specified for programs that change the FP rounding mode dynamically, or that may be executed with a non-default rounding mode. This option disables constant folding of floating-point expressions at compile time (which may be affected by rounding mode) and arithmetic transformations that are unsafe in the presence of sign-dependent rounding modes. The default is -fno-rounding-math. This option is experimental and does not currently guarantee to disable all GCC optimizations that are affected by rounding mode. Future versions of GCC may provide finer control of this setting using C99's "FENV_ACCESS" pragma. This command-line option will be used to specify the default state for "FENV_ACCESS". -fsignaling-nans Compile code assuming that IEEE signaling NaNs may generate user-visible traps during floating-point operations. Setting this option disables optimizations that may change the number of exceptions visible with signaling NaNs. This option implies -ftrapping-math. This option causes the preprocessor macro "__SUPPORT_SNAN__" to be defined. The default is -fno-signaling-nans. This option is experimental and does not currently guarantee to disable all GCC optimizations that affect signaling NaN behavior. -fno-fp-int-builtin-inexact Do not allow the built-in functions "ceil", "floor", "round" and "trunc", and their "float" and "long double" variants, to generate code that raises the "inexact" floating-point exception for noninteger arguments. ISO C99 and C11 allow these functions to raise the "inexact" exception, but ISO/IEC TS 18661-1:2014, the C bindings to IEEE 754-2008, does not allow these functions to do so. The default is -ffp-int-builtin-inexact, allowing the exception to be raised. This option does nothing unless -ftrapping-math is in effect. Even if -fno-fp-int-builtin-inexact is used, if the functions generate a call to a library function then the "inexact" exception may be raised if the library implementation does not follow TS 18661. -fsingle-precision-constant Treat floating-point constants as single precision instead of implicitly converting them to double-precision constants. -fcx-limited-range When enabled, this option states that a range reduction step is not needed when performing complex division. Also, there is no checking whether the result of a complex multiplication or division is "NaN + I*NaN", with an attempt to rescue the situation in that case. The default is -fno-cx-limited-range, but is enabled by -ffast-math. This option controls the default setting of the ISO C99 "CX_LIMITED_RANGE" pragma. Nevertheless, the option applies to all languages. -fcx-fortran-rules Complex multiplication and division follow Fortran rules. Range reduction is done as part of complex division, but there is no checking whether the result of a complex multiplication or division is "NaN + I*NaN", with an attempt to rescue the situation in that case. The default is -fno-cx-fortran-rules. The following options control optimizations that may improve performance, but are not enabled by any -O options. This section includes experimental options that may produce broken code. -fbranch-probabilities After running a program compiled with -fprofile-arcs, you can compile it a second time using -fbranch-probabilities, to improve optimizations based on the number of times each branch was taken. When a program compiled with -fprofile-arcs exits, it saves arc execution counts to a file called sourcename.gcda for each source file. 
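A hedged sketch of that two-step workflow, with placeholder file names:

    gcc -O2 -fprofile-arcs -o myprog myprog.c
    ./myprog                                  # training run; writes the .gcda data
    gcc -O2 -fbranch-probabilities -o myprog myprog.c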
The information in this data file is very dependent on the structure of the generated code, so you must use the same source code and the same optimization options for both compilations. With -fbranch-probabilities, GCC puts a REG_BR_PROB note on each JUMP_INSN and CALL_INSN. These can be used to improve optimization. Currently, they are only used in one place: in reorg.c, instead of guessing which path a branch is most likely to take, the REG_BR_PROB values are used to exactly determine which path is taken more often. Enabled by -fprofile-use and -fauto-profile. -fprofile-values If combined with -fprofile-arcs, it adds code so that some data about values of expressions in the program is gathered. With -fbranch-probabilities, it reads back the data gathered from profiling values of expressions for usage in optimizations. Enabled by -fprofile-generate, -fprofile-use, and -fauto-profile. -fprofile-reorder-functions Function reordering based on profile instrumentation collects first time of execution of a function and orders these functions in ascending order. Enabled with -fprofile-use. -fvpt If combined with -fprofile-arcs, this option instructs the compiler to add code to gather information about values of expressions. With -fbranch-probabilities, it reads back the data gathered and actually performs the optimizations based on them. Currently the optimizations include specialization of division operations using the knowledge about the value of the denominator. Enabled with -fprofile-use and -fauto-profile. -frename-registers Attempt to avoid false dependencies in scheduled code by making use of registers left over after register allocation. This optimization most benefits processors with lots of registers. Depending on the debug information format adopted by the target, however, it can make debugging impossible, since variables no longer stay in a "home register". Enabled by default with -funroll-loops. -fschedule-fusion Performs a target dependent pass over the instruction stream to schedule instructions of same type together because target machine can execute them more efficiently if they are adjacent to each other in the instruction flow. Enabled at levels -O2, -O3, -Os. -ftracer Perform tail duplication to enlarge superblock size. This transformation simplifies the control flow of the function allowing other optimizations to do a better job. Enabled by -fprofile-use and -fauto-profile. -funroll-loops Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop. -funroll-loops implies -frerun-cse-after-loop, -fweb and -frename-registers. It also turns on complete loop peeling (i.e. complete removal of loops with a small constant number of iterations). This option makes code larger, and may or may not make it run faster. Enabled by -fprofile-use and -fauto-profile. -funroll-all-loops Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This usually makes programs run more slowly. -funroll-all-loops implies the same options as -funroll-loops. -fpeel-loops Peels loops for which there is enough information that they do not roll much (from profile feedback or static analysis). It also turns on complete loop peeling (i.e. complete removal of loops with small constant number of iterations). Enabled by -O3, -fprofile-use, and -fauto-profile. -fmove-loop-invariants Enables the loop invariant motion pass in the RTL loop optimizer. Enabled at level -O1 and higher, except for -Og. 
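As a rough illustration (not taken from this manual), the kind of computation the loop invariant motion pass can hoist looks like this:

    /* n * scale is the same on every iteration, so loop invariant
       motion may compute it once before the loop rather than inside it.  */
    void
    fill (int *a, int n, int scale)
    {
      for (int i = 0; i < n; i++)
        a[i] = n * scale + i;
    }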
-fsplit-loops
Split a loop into two if it contains a condition that's always true for one side of the iteration space and false for the other.

Enabled by -fprofile-use and -fauto-profile.

-funswitch-loops
Move branches with loop invariant conditions out of the loop, with duplicates of the loop on both branches (modified according to result of the condition).

Enabled by -fprofile-use and -fauto-profile.

-fversion-loops-for-strides
If a loop iterates over an array with a variable stride, create another version of the loop that assumes the stride is always one. For example:

    for (int i = 0; i < n; ++i)
      x[i * stride] = ...;

becomes:

    if (stride == 1)
      for (int i = 0; i < n; ++i)
        x[i] = ...;
    else
      for (int i = 0; i < n; ++i)
        x[i * stride] = ...;

This is particularly useful for assumed-shape arrays in Fortran where (for example) it allows better vectorization assuming contiguous accesses.

This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-ffunction-sections
-fdata-sections
Place each function or data item into its own section in the output file if the target supports arbitrary sections. The name of the function or the name of the data item determines the section's name in the output file.

Use these options on systems where the linker can perform optimizations to improve locality of reference in the instruction space. Most systems using the ELF object format have linkers with such optimizations. On AIX, the linker rearranges sections (CSECTs) based on the call graph. The performance impact varies.

Together with a linker garbage collection (linker --gc-sections option) these options may lead to smaller statically-linked executables (after stripping).

On ELF/DWARF systems these options do not degrade the quality of the debug information. There could be issues with other object files/debug info formats.

Only use these options when there are significant benefits from doing so. When you specify these options, the assembler and linker create larger object and executable files and are also slower. These options affect code generation. They prevent optimizations by the compiler and assembler using relative locations inside a translation unit since the locations are unknown until link time. An example of such an optimization is relaxing calls to short call instructions.

-fbranch-target-load-optimize
Perform branch target register load optimization before prologue / epilogue threading. The use of target registers can typically be exposed only during reload, thus hoisting loads out of loops and doing inter-block scheduling needs a separate optimization pass.

-fbranch-target-load-optimize2
Perform branch target register load optimization after prologue / epilogue threading.

-fbtr-bb-exclusive
When performing branch target register load optimization, don't reuse branch target registers within any basic block.

-fstdarg-opt
Optimize the prologue of variadic argument functions with respect to usage of those arguments.

-fsection-anchors
Try to reduce the number of symbolic address calculations by using shared "anchor" symbols to address nearby objects. This transformation can help to reduce the number of GOT entries and GOT accesses on some targets. For example, the implementation of the following function "foo":

    static int a, b, c;
    int foo (void) { return a + b + c; }

usually calculates the addresses of all three variables, but if you compile it with -fsection-anchors, it accesses the variables from a common anchor point instead.
The effect is similar to the following pseudocode (which isn't valid C):

    int foo (void)
    {
      register int *xr = &x;
      return xr[&a - &x] + xr[&b - &x] + xr[&c - &x];
    }

Not all targets support this option.

--param name=value
In some places, GCC uses various constants to control the amount of optimization that is done. For example, GCC does not inline functions that contain more than a certain number of instructions. You can control some of these constants on the command line using the --param option.

The names of specific parameters, and the meaning of the values, are tied to the internals of the compiler, and are subject to change without notice in future releases.

In order to get the minimal, maximal and default value of a parameter, one can use the --help=param -Q options.

In each case, the value is an integer. The allowable choices for name are:

predictable-branch-outcome
When a branch is predicted to be taken with probability lower than this threshold (in percent), then it is considered well predictable.

max-rtl-if-conversion-insns
RTL if-conversion tries to remove conditional branches around a block and replace them with conditionally executed instructions. This parameter gives the maximum number of instructions in a block which should be considered for if-conversion. The compiler will also use other heuristics to decide whether if-conversion is likely to be profitable.

max-rtl-if-conversion-predictable-cost
max-rtl-if-conversion-unpredictable-cost
RTL if-conversion will try to remove conditional branches around a block and replace them with conditionally executed instructions. These parameters give the maximum permissible cost for the sequence that would be generated by if-conversion depending on whether the branch is statically determined to be predictable or not. The units for this parameter are the same as those for the GCC internal seq_cost metric. The compiler will try to provide a reasonable default for this parameter using the BRANCH_COST target macro.

max-crossjump-edges
The maximum number of incoming edges to consider for cross-jumping. The algorithm used by -fcrossjumping is O(N^2) in the number of edges incoming to each block. Increasing values mean more aggressive optimization, making the compilation time increase with probably small improvement in executable size.

min-crossjump-insns
The minimum number of instructions that must be matched at the end of two blocks before cross-jumping is performed on them. This value is ignored in the case where all instructions in the block being cross-jumped from are matched.

max-grow-copy-bb-insns
The maximum code size expansion factor when copying basic blocks instead of jumping. The expansion is relative to a jump instruction.

max-goto-duplication-insns
The maximum number of instructions to duplicate to a block that jumps to a computed goto. To avoid O(N^2) behavior in a number of passes, GCC factors computed gotos early in the compilation process, and unfactors them as late as possible. Only computed jumps at the end of a basic block with no more than max-goto-duplication-insns are unfactored.

max-delay-slot-insn-search
The maximum number of instructions to consider when looking for an instruction to fill a delay slot. If more than this arbitrary number of instructions are searched, the time savings from filling the delay slot are minimal, so stop searching. Increasing values mean more aggressive optimization, making the compilation time increase with probably small improvement in execution time.
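A hedged illustration of the --param syntax described above (the parameter and value chosen here are arbitrary examples):

    gcc -Q --help=param                         # show minimum, maximum and default values
    gcc -O3 --param max-unroll-times=4 -c foo.c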
max-delay-slot-live-search
When trying to fill delay slots, the maximum number of instructions to consider when searching for a block with valid live register information. Increasing this arbitrarily chosen value means more aggressive optimization, increasing the compilation time. This parameter should be removed when the delay slot code is rewritten to maintain the control-flow graph.

max-gcse-memory
The approximate maximum amount of memory that can be allocated in order to perform the global common subexpression elimination optimization. If more memory than specified is required, the optimization is not done.

max-gcse-insertion-ratio
If the ratio of expression insertions to deletions is larger than this value for any expression, then RTL PRE does not insert or remove the expression and thus leaves partially redundant computations in the instruction stream.

max-pending-list-length
The maximum number of pending dependencies scheduling allows before flushing the current state and starting over. Large functions with few branches or calls can create excessively large lists which needlessly consume memory and resources.

max-modulo-backtrack-attempts
The maximum number of backtrack attempts the scheduler should make when modulo scheduling a loop. Larger values can exponentially increase compilation time.

max-inline-insns-single
Several parameters control the tree inliner used in GCC. This number sets the maximum number of instructions (counted in GCC's internal representation) in a single function that the tree inliner considers for inlining. This only affects functions declared inline and methods implemented in a class declaration (C++).

max-inline-insns-auto
When you use -finline-functions (included in -O3), a lot of functions that would otherwise not be considered for inlining by the compiler are investigated. To those functions, a different (more restrictive) limit compared to functions declared inline can be applied.

max-inline-insns-small
This bound is applied to calls which are considered relevant with -finline-small-functions.

max-inline-insns-size
This bound is applied to calls which are optimized for size. Small growth may be desirable to anticipate optimization opportunities exposed by inlining.

uninlined-function-insns
Number of instructions accounted by the inliner for function overhead such as function prologue and epilogue.

uninlined-function-time
Extra time accounted by the inliner for function overhead such as time needed to execute function prologue and epilogue.

uninlined-thunk-insns
uninlined-thunk-time
Same as --param uninlined-function-insns and --param uninlined-function-time but applied to function thunks.

inline-min-speedup
When the estimated performance improvement of caller + callee runtime exceeds this threshold (in percent), the function can be inlined regardless of the limit on --param max-inline-insns-single and --param max-inline-insns-auto.

large-function-insns
The limit specifying really large functions. For functions larger than this limit after inlining, inlining is constrained by --param large-function-growth. This parameter is useful primarily to avoid extreme compilation time caused by non-linear algorithms used by the back end.

large-function-growth
Specifies maximal growth of a large function caused by inlining in percents. For example, parameter value 100 limits large function growth to 2.0 times the original size.

large-unit-insns
The limit specifying a large translation unit. Growth caused by inlining of units larger than this limit is limited by --param inline-unit-growth.
For small units this might be too tight. For example, consider a unit consisting of function A that is inline and B that just calls A three times. If B is small relative to A, the growth of unit is 300\% and yet such inlining is very sane. For very large units consisting of small inlineable functions, however, the overall unit growth limit is needed to avoid exponential explosion of code size. Thus for smaller units, the size is increased to --param large-unit-insns before applying --param inline- unit-growth. inline-unit-growth Specifies maximal overall growth of the compilation unit caused by inlining. For example, parameter value 20 limits unit growth to 1.2 times the original size. Cold functions (either marked cold via an attribute or by profile feedback) are not accounted into the unit size. ipcp-unit-growth Specifies maximal overall growth of the compilation unit caused by interprocedural constant propagation. For example, parameter value 10 limits unit growth to 1.1 times the original size. large-stack-frame The limit specifying large stack frames. While inlining the algorithm is trying to not grow past this limit too much. large-stack-frame-growth Specifies maximal growth of large stack frames caused by inlining in percents. For example, parameter value 1000 limits large stack frame growth to 11 times the original size. max-inline-insns-recursive max-inline-insns-recursive-auto Specifies the maximum number of instructions an out-of- line copy of a self-recursive inline function can grow into by performing recursive inlining. --param max-inline-insns-recursive applies to functions declared inline. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled; --param max-inline-insns- recursive-auto applies instead. max-inline-recursive-depth max-inline-recursive-depth-auto Specifies the maximum recursion depth used for recursive inlining. --param max-inline-recursive-depth applies to functions declared inline. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled; --param max-inline- recursive-depth-auto applies instead. min-inline-recursive-probability Recursive inlining is profitable only for function having deep recursion in average and can hurt for function having little recursion depth by increasing the prologue size or complexity of function body to other optimizers. When profile feedback is available (see -fprofile-generate) the actual recursion depth can be guessed from the probability that function recurses via a given call expression. This parameter limits inlining only to call expressions whose probability exceeds the given threshold (in percents). early-inlining-insns Specify growth that the early inliner can make. In effect it increases the amount of inlining for code having a large abstraction penalty. max-early-inliner-iterations Limit of iterations of the early inliner. This basically bounds the number of nested indirect calls the early inliner can resolve. Deeper chains are still handled by late inlining. comdat-sharing-probability Probability (in percent) that C++ inline function with comdat visibility are shared across multiple compilation units. profile-func-internal-id A parameter to control whether to use function internal id in profile database lookup. 
If the value is 0, the compiler uses an id that is based on function assembler name and filename, which makes old profile data more tolerant to source changes such as function reordering etc. min-vect-loop-bound The minimum number of iterations under which loops are not vectorized when -ftree-vectorize is used. The number of iterations after vectorization needs to be greater than the value specified by this option to allow vectorization. gcse-cost-distance-ratio Scaling factor in calculation of maximum distance an expression can be moved by GCSE optimizations. This is currently supported only in the code hoisting pass. The bigger the ratio, the more aggressive code hoisting is with simple expressions, i.e., the expressions that have cost less than gcse-unrestricted-cost. Specifying 0 disables hoisting of simple expressions. gcse-unrestricted-cost Cost, roughly measured as the cost of a single typical machine instruction, at which GCSE optimizations do not constrain the distance an expression can travel. This is currently supported only in the code hoisting pass. The lesser the cost, the more aggressive code hoisting is. Specifying 0 allows all expressions to travel unrestricted distances. max-hoist-depth The depth of search in the dominator tree for expressions to hoist. This is used to avoid quadratic behavior in hoisting algorithm. The value of 0 does not limit on the search, but may slow down compilation of huge functions. max-tail-merge-comparisons The maximum amount of similar bbs to compare a bb with. This is used to avoid quadratic behavior in tree tail merging. max-tail-merge-iterations The maximum amount of iterations of the pass over the function. This is used to limit compilation time in tree tail merging. store-merging-allow-unaligned Allow the store merging pass to introduce unaligned stores if it is legal to do so. max-stores-to-merge The maximum number of stores to attempt to merge into wider stores in the store merging pass. max-unrolled-insns The maximum number of instructions that a loop may have to be unrolled. If a loop is unrolled, this parameter also determines how many times the loop code is unrolled. max-average-unrolled-insns The maximum number of instructions biased by probabilities of their execution that a loop may have to be unrolled. If a loop is unrolled, this parameter also determines how many times the loop code is unrolled. max-unroll-times The maximum number of unrollings of a single loop. max-peeled-insns The maximum number of instructions that a loop may have to be peeled. If a loop is peeled, this parameter also determines how many times the loop code is peeled. max-peel-times The maximum number of peelings of a single loop. max-peel-branches The maximum number of branches on the hot path through the peeled sequence. max-completely-peeled-insns The maximum number of insns of a completely peeled loop. max-completely-peel-times The maximum number of iterations of a loop to be suitable for complete peeling. max-completely-peel-loop-nest-depth The maximum depth of a loop nest suitable for complete peeling. max-unswitch-insns The maximum number of insns of an unswitched loop. max-unswitch-level The maximum number of branches unswitched in a single loop. lim-expensive The minimum cost of an expensive expression in the loop invariant motion. iv-consider-all-candidates-bound Bound on number of candidates for induction variables, below which all candidates are considered for each use in induction variable optimizations. 
If there are more candidates than this, only the most relevant ones are considered to avoid quadratic time complexity. iv-max-considered-uses The induction variable optimizations give up on loops that contain more induction variable uses. iv-always-prune-cand-set-bound If the number of candidates in the set is smaller than this value, always try to remove unnecessary ivs from the set when adding a new one. avg-loop-niter Average number of iterations of a loop. dse-max-object-size Maximum size (in bytes) of objects tracked bytewise by dead store elimination. Larger values may result in larger compilation times. dse-max-alias-queries-per-store Maximum number of queries into the alias oracle per store. Larger values result in larger compilation times and may result in more removed dead stores. scev-max-expr-size Bound on size of expressions used in the scalar evolutions analyzer. Large expressions slow the analyzer. scev-max-expr-complexity Bound on the complexity of the expressions in the scalar evolutions analyzer. Complex expressions slow the analyzer. max-tree-if-conversion-phi-args Maximum number of arguments in a PHI supported by TREE if conversion unless the loop is marked with simd pragma. vect-max-version-for-alignment-checks The maximum number of run-time checks that can be performed when doing loop versioning for alignment in the vectorizer. vect-max-version-for-alias-checks The maximum number of run-time checks that can be performed when doing loop versioning for alias in the vectorizer. vect-max-peeling-for-alignment The maximum number of loop peels to enhance access alignment for vectorizer. Value -1 means no limit. max-iterations-to-track The maximum number of iterations of a loop the brute- force algorithm for analysis of the number of iterations of the loop tries to evaluate. hot-bb-count-ws-permille A basic block profile count is considered hot if it contributes to the given permillage (i.e. 0...1000) of the entire profiled execution. hot-bb-frequency-fraction Select fraction of the entry block frequency of executions of basic block in function given basic block needs to have to be considered hot. max-predicted-iterations The maximum number of loop iterations we predict statically. This is useful in cases where a function contains a single loop with known bound and another loop with unknown bound. The known number of iterations is predicted correctly, while the unknown number of iterations average to roughly 10. This means that the loop without bounds appears artificially cold relative to the other one. builtin-expect-probability Control the probability of the expression having the specified value. This parameter takes a percentage (i.e. 0 ... 100) as input. builtin-string-cmp-inline-length The maximum length of a constant string for a builtin string cmp call eligible for inlining. align-threshold Select fraction of the maximal frequency of executions of a basic block in a function to align the basic block. align-loop-iterations A loop expected to iterate at least the selected number of iterations is aligned. tracer-dynamic-coverage tracer-dynamic-coverage-feedback This value is used to limit superblock formation once the given percentage of executed instructions is covered. This limits unnecessary code size expansion. The tracer-dynamic-coverage-feedback parameter is used only when profile feedback is available. The real profiles (as opposed to statically estimated ones) are much less balanced allowing the threshold to be larger value. 
tracer-max-code-growth Stop tail duplication once code growth has reached given percentage. This is a rather artificial limit, as most of the duplicates are eliminated later in cross jumping, so it may be set to much higher values than is the desired code growth. tracer-min-branch-ratio Stop reverse growth when the reverse probability of best edge is less than this threshold (in percent). tracer-min-branch-probability tracer-min-branch-probability-feedback Stop forward growth if the best edge has probability lower than this threshold. Similarly to tracer-dynamic-coverage two parameters are provided. tracer-min-branch-probability-feedback is used for compilation with profile feedback and tracer-min- branch-probability compilation without. The value for compilation with profile feedback needs to be more conservative (higher) in order to make tracer effective. stack-clash-protection-guard-size Specify the size of the operating system provided stack guard as 2 raised to num bytes. Higher values may reduce the number of explicit probes, but a value larger than the operating system provided guard will leave code vulnerable to stack clash style attacks. stack-clash-protection-probe-interval Stack clash protection involves probing stack space as it is allocated. This param controls the maximum distance between probes into the stack as 2 raised to num bytes. Higher values may reduce the number of explicit probes, but a value larger than the operating system provided guard will leave code vulnerable to stack clash style attacks. max-cse-path-length The maximum number of basic blocks on path that CSE considers. max-cse-insns The maximum number of instructions CSE processes before flushing. ggc-min-expand GCC uses a garbage collector to manage its own memory allocation. This parameter specifies the minimum percentage by which the garbage collector's heap should be allowed to expand between collections. Tuning this may improve compilation speed; it has no effect on code generation. The default is 30% + 70% * (RAM/1GB) with an upper bound of 100% when RAM >= 1GB. If "getrlimit" is available, the notion of "RAM" is the smallest of actual RAM and "RLIMIT_DATA" or "RLIMIT_AS". If GCC is not able to calculate RAM on a particular platform, the lower bound of 30% is used. Setting this parameter and ggc-min- heapsize to zero causes a full collection to occur at every opportunity. This is extremely slow, but can be useful for debugging. ggc-min-heapsize Minimum size of the garbage collector's heap before it begins bothering to collect garbage. The first collection occurs after the heap expands by ggc-min- expand% beyond ggc-min-heapsize. Again, tuning this may improve compilation speed, and has no effect on code generation. The default is the smaller of RAM/8, RLIMIT_RSS, or a limit that tries to ensure that RLIMIT_DATA or RLIMIT_AS are not exceeded, but with a lower bound of 4096 (four megabytes) and an upper bound of 131072 (128 megabytes). If GCC is not able to calculate RAM on a particular platform, the lower bound is used. Setting this parameter very large effectively disables garbage collection. Setting this parameter and ggc-min-expand to zero causes a full collection to occur at every opportunity. max-reload-search-insns The maximum number of instruction reload should look backward for equivalent register. Increasing values mean more aggressive optimization, making the compilation time increase with probably slightly better performance. 
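As a hedged example of the garbage-collector tuning described above (big.c is a placeholder), forcing a full collection at every opportunity while debugging the compiler:

    gcc -c -O2 --param ggc-min-expand=0 --param ggc-min-heapsize=0 big.c

As noted above, this is extremely slow and only useful for debugging.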
max-cselib-memory-locations The maximum number of memory locations cselib should take into account. Increasing values mean more aggressive optimization, making the compilation time increase with probably slightly better performance. max-sched-ready-insns The maximum number of instructions ready to be issued the scheduler should consider at any given time during the first scheduling pass. Increasing values mean more thorough searches, making the compilation time increase with probably little benefit. max-sched-region-blocks The maximum number of blocks in a region to be considered for interblock scheduling. max-pipeline-region-blocks The maximum number of blocks in a region to be considered for pipelining in the selective scheduler. max-sched-region-insns The maximum number of insns in a region to be considered for interblock scheduling. max-pipeline-region-insns The maximum number of insns in a region to be considered for pipelining in the selective scheduler. min-spec-prob The minimum probability (in percents) of reaching a source block for interblock speculative scheduling. max-sched-extend-regions-iters The maximum number of iterations through CFG to extend regions. A value of 0 disables region extensions. max-sched-insn-conflict-delay The maximum conflict delay for an insn to be considered for speculative motion. sched-spec-prob-cutoff The minimal probability of speculation success (in percents), so that speculative insns are scheduled. sched-state-edge-prob-cutoff The minimum probability an edge must have for the scheduler to save its state across it. sched-mem-true-dep-cost Minimal distance (in CPU cycles) between store and load targeting same memory locations. selsched-max-lookahead The maximum size of the lookahead window of selective scheduling. It is a depth of search for available instructions. selsched-max-sched-times The maximum number of times that an instruction is scheduled during selective scheduling. This is the limit on the number of iterations through which the instruction may be pipelined. selsched-insns-to-rename The maximum number of best instructions in the ready list that are considered for renaming in the selective scheduler. sms-min-sc The minimum value of stage count that swing modulo scheduler generates. max-last-value-rtl The maximum size measured as number of RTLs that can be recorded in an expression in combiner for a pseudo register as last known value of that register. max-combine-insns The maximum number of instructions the RTL combiner tries to combine. integer-share-limit Small integer constants can use a shared data structure, reducing the compiler's memory usage and increasing its speed. This sets the maximum value of a shared integer constant. ssp-buffer-size The minimum size of buffers (i.e. arrays) that receive stack smashing protection when -fstack-protection is used. min-size-for-stack-sharing The minimum size of variables taking part in stack slot sharing when not optimizing. max-jump-thread-duplication-stmts Maximum number of statements allowed in a block that needs to be duplicated when threading jumps. max-fields-for-field-sensitive Maximum number of fields in a structure treated in a field sensitive manner during pointer analysis. prefetch-latency Estimate on average number of instructions that are executed before prefetch finishes. The distance prefetched ahead is proportional to this constant. Increasing this number may also lead to less streams being prefetched (see simultaneous-prefetches). 
simultaneous-prefetches Maximum number of prefetches that can run at the same time. l1-cache-line-size The size of cache line in L1 data cache, in bytes. l1-cache-size The size of L1 data cache, in kilobytes. l2-cache-size The size of L2 data cache, in kilobytes. prefetch-dynamic-strides Whether the loop array prefetch pass should issue software prefetch hints for strides that are non- constant. In some cases this may be beneficial, though the fact the stride is non-constant may make it hard to predict when there is clear benefit to issuing these hints. Set to 1 if the prefetch hints should be issued for non- constant strides. Set to 0 if prefetch hints should be issued only for strides that are known to be constant and below prefetch-minimum-stride. prefetch-minimum-stride Minimum constant stride, in bytes, to start using prefetch hints for. If the stride is less than this threshold, prefetch hints will not be issued. This setting is useful for processors that have hardware prefetchers, in which case there may be conflicts between the hardware prefetchers and the software prefetchers. If the hardware prefetchers have a maximum stride they can handle, it should be used here to improve the use of software prefetchers. A value of -1 means we don't have a threshold and therefore prefetch hints can be issued for any constant stride. This setting is only useful for strides that are known and constant. loop-interchange-max-num-stmts The maximum number of stmts in a loop to be interchanged. loop-interchange-stride-ratio The minimum ratio between stride of two loops for interchange to be profitable. min-insn-to-prefetch-ratio The minimum ratio between the number of instructions and the number of prefetches to enable prefetching in a loop. prefetch-min-insn-to-mem-ratio The minimum ratio between the number of instructions and the number of memory references to enable prefetching in a loop. use-canonical-types Whether the compiler should use the "canonical" type system. Should always be 1, which uses a more efficient internal mechanism for comparing types in C++ and Objective-C++. However, if bugs in the canonical type system are causing compilation failures, set this value to 0 to disable canonical types. switch-conversion-max-branch-ratio Switch initialization conversion refuses to create arrays that are bigger than switch-conversion-max-branch-ratio times the number of branches in the switch. max-partial-antic-length Maximum length of the partial antic set computed during the tree partial redundancy elimination optimization (-ftree-pre) when optimizing at -O3 and above. For some sorts of source code the enhanced partial redundancy elimination optimization can run away, consuming all of the memory available on the host machine. This parameter sets a limit on the length of the sets that are computed, which prevents the runaway behavior. Setting a value of 0 for this parameter allows an unlimited set length. rpo-vn-max-loop-depth Maximum loop depth that is value-numbered optimistically. When the limit hits the innermost rpo-vn-max-loop-depth loops and the outermost loop in the loop nest are value- numbered optimistically and the remaining ones not. sccvn-max-alias-queries-per-access Maximum number of alias-oracle queries we perform when looking for redundancies for loads and stores. If this limit is hit the search is aborted and the load or store is not considered redundant. The number of queries is algorithmically limited to the number of stores on all paths from the load to the function entry. 
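A hedged sketch of overriding the prefetch-related parameters described above; the values shown are arbitrary, and -fprefetch-loop-arrays (documented elsewhere in this manual) is assumed to activate the loop array prefetch pass:

    gcc -O3 -fprefetch-loop-arrays \
        --param prefetch-latency=400 \
        --param simultaneous-prefetches=4 \
        --param l1-cache-line-size=64 -c loop.c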
ira-max-loops-num
IRA uses regional register allocation by default. If a function contains more loops than the number given by this parameter, only at most the given number of the most frequently-executed loops form regions for regional register allocation.

ira-max-conflict-table-size
Although IRA uses a sophisticated algorithm to compress the conflict table, the table can still require excessive amounts of memory for huge functions. If the conflict table for a function could be more than the size in MB given by this parameter, the register allocator instead uses a faster, simpler, and lower-quality algorithm that does not require building a pseudo-register conflict table.

ira-loop-reserved-regs
IRA can be used to evaluate more accurate register pressure in loops for decisions to move loop invariants (see -O3). The number of available registers reserved for some other purposes is given by this parameter. The default of the parameter is the best value found from numerous experiments.

lra-inheritance-ebb-probability-cutoff
LRA tries to reuse values reloaded in registers in subsequent insns. This optimization is called inheritance. EBB is used as a region to do this optimization. The parameter defines a minimal fall-through edge probability in percentage used to add BB to inheritance EBB in LRA. The default value was chosen from numerous runs of SPEC2000 on x86-64.

loop-invariant-max-bbs-in-loop
Loop invariant motion can be very expensive, both in compilation time and in amount of needed compile-time memory, with very large loops. Loops with more basic blocks than this parameter won't have loop invariant motion optimization performed on them.

loop-max-datarefs-for-datadeps
Building data dependencies is expensive for very large loops. This parameter limits the number of data references in loops that are considered for data dependence analysis. These large loops are not handled by the optimizations using loop data dependencies.

max-vartrack-size
Sets a maximum number of hash table slots to use during variable tracking dataflow analysis of any function. If this limit is exceeded with variable tracking at assignments enabled, analysis for that function is retried without it, after removing all debug insns from the function. If the limit is exceeded even without debug insns, var tracking analysis is completely disabled for the function. Setting the parameter to zero makes it unlimited.

max-vartrack-expr-depth
Sets a maximum number of recursion levels when attempting to map variable names or debug temporaries to value expressions. This trades compilation time for more complete debug information. If this is set too low, value expressions that are available and could be represented in debug information may end up not being used; setting this higher may enable the compiler to find more complex debug expressions, but compile time and memory use may grow.

max-debug-marker-count
Sets a threshold on the number of debug markers (e.g. begin stmt markers) to avoid complexity explosion at inlining or expanding to RTL. If a function has more such gimple stmts than the set limit, such stmts will be dropped from the inlined copy of a function, and from its RTL expansion.

min-nondebug-insn-uid
Use uids starting at this parameter for nondebug insns. The range below the parameter is reserved exclusively for debug insns created by -fvar-tracking-assignments, but debug insns may get (non-overlapping) uids above it if the reserved range is exhausted.
ipa-sra-ptr-growth-factor
IPA-SRA replaces a pointer to an aggregate with one or more new parameters only when their cumulative size is less or equal to ipa-sra-ptr-growth-factor times the size of the original pointer parameter.

sra-max-scalarization-size-Ospeed
sra-max-scalarization-size-Osize
The two Scalar Reduction of Aggregates passes (SRA and IPA-SRA) aim to replace scalar parts of aggregates with uses of independent scalar variables. These parameters control the maximum size, in storage units, of aggregate which is considered for replacement when compiling for speed (sra-max-scalarization-size-Ospeed) or size (sra-max-scalarization-size-Osize) respectively.

sra-max-propagations
The maximum number of artificial accesses that Scalar Replacement of Aggregates (SRA) will track, per one local variable, in order to facilitate copy propagation.

tm-max-aggregate-size
When making copies of thread-local variables in a transaction, this parameter specifies the size in bytes after which variables are saved with the logging functions as opposed to save/restore code sequence pairs. This option only applies when using -fgnu-tm.

graphite-max-nb-scop-params
To avoid exponential effects in the Graphite loop transforms, the number of parameters in a Static Control Part (SCoP) is bounded. A value of zero can be used to lift the bound. A variable whose value is unknown at compilation time and defined outside a SCoP is a parameter of the SCoP.

loop-block-tile-size
Loop blocking or strip mining transforms, enabled with -floop-block or -floop-strip-mine, strip mine each loop in the loop nest by a given number of iterations. The strip length can be changed using the loop-block-tile-size parameter.

ipa-cp-value-list-size
IPA-CP attempts to track all possible values and types passed to a function's parameter in order to propagate them and perform devirtualization. ipa-cp-value-list-size is the maximum number of values and types it stores per one formal parameter of a function.

ipa-cp-eval-threshold
IPA-CP calculates its own score of cloning profitability heuristics and performs those cloning opportunities with scores that exceed ipa-cp-eval-threshold.

ipa-cp-recursion-penalty
Percentage penalty the recursive functions will receive when they are evaluated for cloning.

ipa-cp-single-call-penalty
Percentage penalty functions containing a single call to another function will receive when they are evaluated for cloning.

ipa-max-agg-items
IPA-CP is also capable of propagating a number of scalar values passed in an aggregate. ipa-max-agg-items controls the maximum number of such values per one parameter.

ipa-cp-loop-hint-bonus
When IPA-CP determines that a cloning candidate would make the number of iterations of a loop known, it adds a bonus of ipa-cp-loop-hint-bonus to the profitability score of the candidate.

ipa-cp-array-index-hint-bonus
When IPA-CP determines that a cloning candidate would make the index of an array access known, it adds a bonus of ipa-cp-array-index-hint-bonus to the profitability score of the candidate.

ipa-max-aa-steps
During its analysis of function bodies, IPA-CP employs alias analysis in order to track values pointed to by function parameters. In order not to spend too much time analyzing huge functions, it gives up and considers all memory clobbered after examining ipa-max-aa-steps statements modifying memory.

lto-partitions
Specify the desired number of partitions produced during WHOPR compilation. The number of partitions should exceed the number of CPUs used for compilation.
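For example, a hedged sketch that combines the parallel LTO link shown earlier with an explicit partition count (object file names are placeholders):

    gcc -o myprog -O2 -flto=4 --param lto-partitions=16 a.o b.o c.o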
lto-min-partition
Size of minimal partition for WHOPR (in estimated instructions). This prevents expenses of splitting very small programs into too many partitions.

lto-max-partition
Size of max partition for WHOPR (in estimated instructions), intended to provide an upper bound for the individual size of a partition. Meant to be used only with balanced partitioning.

lto-max-streaming-parallelism
Maximal number of parallel processes used for LTO streaming.

cxx-max-namespaces-for-diagnostic-help
The maximum number of namespaces to consult for suggestions when C++ name lookup fails for an identifier.

sink-frequency-threshold
The maximum relative execution frequency (in percents) of the target block relative to a statement's original block to allow statement sinking of a statement. Larger numbers result in more aggressive statement sinking. A small positive adjustment is applied for statements with memory operands as those are even more profitable to sink.

max-stores-to-sink
The maximum number of conditional store pairs that can be sunk. Set to 0 if either vectorization (-ftree-vectorize) or if-conversion (-ftree-loop-if-convert) is disabled.

allow-store-data-races
Allow optimizers to introduce new data races on stores. Set to 1 to allow, otherwise to 0.

case-values-threshold
The smallest number of different values for which it is best to use a jump-table instead of a tree of conditional branches. If the value is 0, use the default for the machine.

tree-reassoc-width
Set the maximum number of instructions executed in parallel in a reassociated tree. This parameter overrides the target-dependent heuristics used by default if it has a non-zero value.

sched-pressure-algorithm
Choose between the two available implementations of -fsched-pressure. Algorithm 1 is the original implementation and is more likely to prevent instructions from being reordered. Algorithm 2 was designed to be a compromise between the relatively conservative approach taken by algorithm 1 and the rather aggressive approach taken by the default scheduler. It relies more heavily on having a regular register file and accurate register pressure classes. See haifa-sched.c in the GCC sources for more details. The default choice depends on the target.

max-slsr-cand-scan
Set the maximum number of existing candidates that are considered when seeking a basis for a new straight-line strength reduction candidate.

asan-globals
Enable buffer overflow detection for global objects. This kind of protection is enabled by default if you are using the -fsanitize=address option. To disable global objects protection use --param asan-globals=0.

asan-stack
Enable buffer overflow detection for stack objects. This kind of protection is enabled by default when using -fsanitize=address. To disable stack protection use the --param asan-stack=0 option.

asan-instrument-reads
Enable buffer overflow detection for memory reads. This kind of protection is enabled by default when using -fsanitize=address. To disable memory reads protection use --param asan-instrument-reads=0.

asan-instrument-writes
Enable buffer overflow detection for memory writes. This kind of protection is enabled by default when using -fsanitize=address. To disable memory writes protection use the --param asan-instrument-writes=0 option.

asan-memintrin
Enable detection for built-in functions. This kind of protection is enabled by default when using -fsanitize=address. To disable built-in functions protection use --param asan-memintrin=0.

asan-use-after-return
Enable detection of use-after-return.
This kind of protection is enabled by default when using the -fsanitize=address option. To disable it, use --param asan-use-after-return=0. Note: By default the check is disabled at run time. To enable it, add "detect_stack_use_after_return=1" to the environment variable ASAN_OPTIONS.

asan-instrumentation-with-call-threshold
If the number of memory accesses in the function being instrumented is greater than or equal to this number, use callbacks instead of inline checks. For example, to disable inline code, use --param asan-instrumentation-with-call-threshold=0.

use-after-scope-direct-emission-threshold
If the size of a local variable in bytes is smaller than or equal to this number, directly poison (or unpoison) shadow memory instead of using run-time callbacks.

max-fsm-thread-path-insns
Maximum number of instructions to copy when duplicating blocks on a finite state automaton jump thread path.

max-fsm-thread-length
Maximum number of basic blocks on a finite state automaton jump thread path.

max-fsm-thread-paths
Maximum number of new jump thread paths to create for a finite state automaton.

parloops-chunk-size
Chunk size of the omp schedule for loops parallelized by parloops.

parloops-schedule
Schedule type of the omp schedule for loops parallelized by parloops (static, dynamic, guided, auto, runtime).

parloops-min-per-thread
The minimum number of iterations per thread of an innermost parallelized loop for which the parallelized variant is preferred over the single-threaded one. Note that for a parallelized loop nest the minimum number of iterations of the outermost loop per thread is two.

max-ssa-name-query-depth
Maximum depth of recursion when querying properties of SSA names in things like fold routines. One level of recursion corresponds to following a use-def chain.

hsa-gen-debug-stores
Enable emission of special debug stores within HSA kernels which are then read and reported by the libgomp plugin. Generation of these stores is disabled by default; use --param hsa-gen-debug-stores=1 to enable it.

max-speculative-devirt-maydefs
The maximum number of may-defs we analyze when looking for a must-def specifying the dynamic type of an object that invokes a virtual call we may be able to devirtualize speculatively.

max-vrp-switch-assertions
The maximum number of assertions to add along the default edge of a switch statement during VRP.

unroll-jam-min-percent
The minimum percentage of memory references that must be optimized away for the unroll-and-jam transformation to be considered profitable.

unroll-jam-max-unroll
The maximum number of times the outer loop should be unrolled by the unroll-and-jam transformation.

max-rtl-if-conversion-unpredictable-cost
Maximum permissible cost for the sequence that would be generated by the RTL if-conversion pass for a branch that is considered unpredictable.

max-variable-expansions-in-unroller
If -fvariable-expansion-in-unroller is used, the maximum number of times that an individual variable will be expanded during loop unrolling.

tracer-min-branch-probability-feedback
Stop forward growth if the probability of the best edge is less than this threshold (in percent). Used when profile feedback is available.

partial-inlining-entry-probability
Maximum probability of the entry BB of the split region (in percent, relative to the entry BB of the function) that still allows partial inlining to happen.

max-tracked-strlens
Maximum number of strings for which the strlen optimization pass will track string lengths.

gcse-after-reload-partial-fraction
The threshold ratio for performing partial redundancy elimination after reload.
gcse-after-reload-critical-fraction
The threshold ratio of the execution count of critical edges that permits performing redundancy elimination after reload.

max-loop-header-insns
The maximum number of insns in a loop header duplicated by the copy-loop-headers pass.

vect-epilogues-nomask
Enable loop epilogue vectorization using a smaller vector size.

slp-max-insns-in-bb
Maximum number of instructions in a basic block to be considered for SLP vectorization.

avoid-fma-max-bits
Maximum number of bits for which we avoid creating FMAs.

sms-loop-average-count-threshold
A threshold on the average loop count considered by the swing modulo scheduler.

sms-dfa-history
The number of cycles the swing modulo scheduler considers when checking conflicts using DFA.

hot-bb-count-fraction
The fraction of the maximal repetition count of any basic block in the program that a given basic block needs to reach in order to be considered hot (used in non-LTO mode).

max-inline-insns-recursive-auto
The maximum number of instructions that a non-inline function can grow to via recursive inlining.

graphite-allow-codegen-errors
Whether codegen errors should be ICEs when -fchecking is in effect.

sms-max-ii-factor
A factor for tuning the upper bound that the swing modulo scheduler uses for scheduling a loop.

lra-max-considered-reload-pseudos
The maximum number of reload pseudos which are considered while spilling a non-reload pseudo.

max-pow-sqrt-depth
Maximum depth of sqrt chains to use when synthesizing exponentiation by a real constant.

max-dse-active-local-stores
Maximum number of active local stores in RTL dead store elimination.

asan-instrument-allocas
Enable AddressSanitizer protection for allocas and VLAs.

max-iterations-computation-cost
Bound on the cost of an expression to compute the number of iterations.

max-isl-operations
Maximum number of isl operations; 0 means unlimited.

graphite-max-arrays-per-scop
Maximum number of arrays per SCoP.

max-vartrack-reverse-op-size
Maximum size of a location list for which reverse ops should be added.

unlikely-bb-count-fraction
The minimum fraction of profile runs that a given basic block's execution count must reach in order not to be considered unlikely.

tracer-dynamic-coverage-feedback
The percentage of a function, weighted by execution frequency, that must be covered by trace formation. Used when profile feedback is available.

max-inline-recursive-depth-auto
The maximum depth of recursive inlining for non-inline functions.

fsm-scale-path-stmts
Scale factor to apply to the number of statements in a threading path when comparing to the number of (scaled) blocks.

fsm-maximum-phi-arguments
Maximum number of arguments a PHI may have before the FSM threader will not try to thread through its block.

uninit-control-dep-attempts
Maximum number of nested calls to search for control dependencies during uninitialized variable analysis.

indir-call-topn-profile
Track the top N target addresses in the indirect-call profile.

max-once-peeled-insns
The maximum number of insns of a peeled loop that rolls only once.

sra-max-scalarization-size-Osize
Maximum size, in storage units, of an aggregate which should be considered for scalarization when compiling for size.

fsm-scale-path-blocks
Scale factor to apply to the number of blocks in a threading path when comparing to the number of (scaled) statements.

sched-autopref-queue-depth
Hardware autoprefetcher scheduler model control flag. Number of lookahead cycles the model looks into; at '0' only enable the instruction sorting heuristic.
loop-versioning-max-inner-insns
The maximum number of instructions that an inner loop can have before the loop versioning pass considers it too big to copy.

loop-versioning-max-outer-insns
The maximum number of instructions that an outer loop can have before the loop versioning pass considers it too big to copy, discounting any instructions in inner loops that directly benefit from versioning.

ssa-name-def-chain-limit
The maximum number of SSA_NAME assignments to follow in determining a property of a variable such as its value. This limits the number of iterations or recursive calls GCC performs when optimizing certain statements or when determining their validity prior to issuing diagnostics.

Program Instrumentation Options

GCC supports a number of command-line options that control adding run-time instrumentation to the code it normally generates. For example, one purpose of instrumentation is to collect profiling statistics for use in finding program hot spots, code coverage analysis, or profile-guided optimizations. Another class of program instrumentation is adding run-time checking to detect programming errors like invalid pointer dereferences or out-of-bounds array accesses, as well as deliberately hostile attacks such as stack smashing or C++ vtable hijacking. There is also a general hook which can be used to implement other forms of tracing or function-level instrumentation for debug or program analysis purposes.

-p
-pg
Generate extra code to write profile information suitable for the analysis program prof (for -p) or gprof (for -pg). You must use this option when compiling the source files you want data about, and you must also use it when linking. You can use the function attribute "no_instrument_function" to suppress profiling of individual functions when compiling with these options.

-fprofile-arcs
Add code so that program flow arcs are instrumented. During execution the program records how many times each branch and call is executed and how many times it is taken or returns. On targets that support constructors with priority support, profiling properly handles constructors, destructors and C++ constructors (and destructors) of classes which are used as a type of a global variable.

When the compiled program exits it saves this data to a file called auxname.gcda for each source file. The data may be used for profile-directed optimizations (-fbranch-probabilities), or for test coverage analysis (-ftest-coverage). Each object file's auxname is generated from the name of the output file, if explicitly specified and it is not the final executable, otherwise it is the basename of the source file. In both cases any suffix is removed (e.g. foo.gcda for input file dir/foo.c, or dir/foo.gcda for output file specified as -o dir/foo.o).

--coverage
This option is used to compile and link code instrumented for coverage analysis. The option is a synonym for -fprofile-arcs -ftest-coverage (when compiling) and -lgcov (when linking). See the documentation for those options for more details.

* Compile the source files with -fprofile-arcs plus optimization and code generation options. For test coverage analysis, use the additional -ftest-coverage option. You do not need to profile every source file in a program.

* Compile the source files additionally with -fprofile-abs-path to create absolute path names in the .gcno files. This allows gcov to find the correct sources in projects where compilations occur with different working directories.
* Link your object files with -lgcov or -fprofile-arcs (the latter implies the former).

* Run the program on a representative workload to generate the arc profile information. This may be repeated any number of times. You can run concurrent instances of your program, and provided that the file system supports locking, the data files will be correctly updated. Unless a strict ISO C dialect option is in effect, "fork" calls are detected and correctly handled without double counting.

* For profile-directed optimizations, compile the source files again with the same optimization and code generation options plus -fbranch-probabilities.

* For test coverage analysis, use gcov to produce human-readable information from the .gcno and .gcda files. Refer to the gcov documentation for further information.

With -fprofile-arcs, for each function of your program GCC creates a program flow graph, then finds a spanning tree for the graph. Only arcs that are not on the spanning tree have to be instrumented: the compiler adds code to count the number of times that these arcs are executed. When an arc is the only exit or only entrance to a block, the instrumentation code can be added to the block; otherwise, a new basic block must be created to hold the instrumentation code.

-ftest-coverage
Produce a notes file that the gcov code-coverage utility can use to show program coverage. Each source file's note file is called auxname.gcno. Refer to the -fprofile-arcs option above for a description of auxname and instructions on how to generate test coverage data. Coverage data matches the source files more closely if you do not optimize.

-fprofile-abs-path
Automatically convert relative source file names to absolute path names in the .gcno files. This allows gcov to find the correct sources in projects where compilations occur with different working directories.

-fprofile-dir=path
Set the directory in which to search for the profile data files to path. This option affects only the profile data generated by -fprofile-generate, -ftest-coverage, -fprofile-arcs and used by -fprofile-use and -fbranch-probabilities and their related options. Both absolute and relative paths can be used. By default, GCC uses the current directory as path, so the profile data file appears in the same directory as the object file. To prevent file name clashes, if the object file name is not an absolute path, we mangle the absolute path of the sourcename.gcda file and use it as the file name of a .gcda file. When an executable is run in a massively parallel environment, it is recommended to save the profile data to different directories. That can be done with variables in path that are substituted at run time:
%p        process ID.
%q{VAR}   value of the environment variable VAR.

-fprofile-generate
-fprofile-generate=path
Enable the options usually used for instrumenting an application to produce a profile useful for later recompilation with profile-feedback-based optimization. You must use -fprofile-generate both when compiling and when linking your program. The following options are enabled: -fprofile-arcs, -fprofile-values, -finline-functions, and -fipa-bit-cp. If path is specified, GCC looks at the path to find the profile feedback data files. See -fprofile-dir. To optimize the program based on the collected profile information, use -fprofile-use.

-fprofile-update=method
Alter the update method for an application instrumented for profile-feedback-based optimization. The method argument should be one of single, atomic or prefer-atomic.
The first one is useful for single-threaded applications, while the second one prevents profile corruption by emitting thread-safe code.

Warning: When an application does not properly join all threads (or creates a detached thread), a profile file can still be corrupted.

The value prefer-atomic is transformed either to atomic, when supported by a target, or to single otherwise. The GCC driver automatically selects prefer-atomic when -pthread is present in the command line.

-fprofile-filter-files=regex
Instrument only functions from files whose names match any of the regular expressions (separated by semicolons). For example, -fprofile-filter-files=main.c;module.*.c instruments only main.c and all C files starting with 'module'.

-fprofile-exclude-files=regex
Do not instrument functions from files whose names match any of the regular expressions (separated by semicolons). For example, -fprofile-exclude-files=/usr/* prevents instrumentation of all files that are located in the /usr/ directory.

-fsanitize=address
Enable AddressSanitizer, a fast memory error detector. Memory access instructions are instrumented to detect out-of-bounds and use-after-free bugs. The option enables -fsanitize-address-use-after-scope. See <https://github.com/google/sanitizers/wiki/AddressSanitizer> for more details. The run-time behavior can be influenced using the ASAN_OPTIONS environment variable. When set to "help=1", the available options are shown at startup of the instrumented program. See <https://github.com/google/sanitizers/wiki/AddressSanitizerFlags#run-time-flags> for a list of supported options. The option cannot be combined with -fsanitize=thread.

-fsanitize=kernel-address
Enable AddressSanitizer for the Linux kernel. See <https://github.com/google/kasan/wiki> for more details.

-fsanitize=pointer-compare
Instrument comparison operations (<, <=, >, >=) with pointer operands. The option must be combined with either -fsanitize=kernel-address or -fsanitize=address. The option cannot be combined with -fsanitize=thread. Note: By default the check is disabled at run time. To enable it, add "detect_invalid_pointer_pairs=2" to the environment variable ASAN_OPTIONS. Using "detect_invalid_pointer_pairs=1" detects invalid operations only when both pointers are non-null.

-fsanitize=pointer-subtract
Instrument subtraction with pointer operands. The option must be combined with either -fsanitize=kernel-address or -fsanitize=address. The option cannot be combined with -fsanitize=thread. Note: By default the check is disabled at run time. To enable it, add "detect_invalid_pointer_pairs=2" to the environment variable ASAN_OPTIONS. Using "detect_invalid_pointer_pairs=1" detects invalid operations only when both pointers are non-null.

-fsanitize=thread
Enable ThreadSanitizer, a fast data race detector. Memory access instructions are instrumented to detect data race bugs. See <https://github.com/google/sanitizers/wiki#threadsanitizer> for more details. The run-time behavior can be influenced using the TSAN_OPTIONS environment variable; see <https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags> for a list of supported options. The option cannot be combined with -fsanitize=address or -fsanitize=leak. Note that sanitized atomic builtins cannot throw exceptions when operating on invalid memory addresses with non-call exceptions (-fnon-call-exceptions).

-fsanitize=leak
Enable LeakSanitizer, a memory leak detector.
This option only matters when linking executables, and it causes the executable to be linked against a library that overrides "malloc" and other allocator functions. See <https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer> for more details. The run-time behavior can be influenced using the LSAN_OPTIONS environment variable. The option cannot be combined with -fsanitize=thread.

-fsanitize=undefined
Enable UndefinedBehaviorSanitizer, a fast undefined behavior detector. Various computations are instrumented to detect undefined behavior at runtime. See <https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html> for more details. The run-time behavior can be influenced using the UBSAN_OPTIONS environment variable. Current suboptions are:

-fsanitize=shift
This option enables checking that the result of a shift operation is not undefined. Note that what exactly is considered undefined differs slightly between C and C++, as well as between ISO C90 and C99, etc. This option has two suboptions, -fsanitize=shift-base and -fsanitize=shift-exponent.

-fsanitize=shift-exponent
This option enables checking that the second argument of a shift operation is not negative and is smaller than the precision of the promoted first argument.

-fsanitize=shift-base
If the second argument of a shift operation is within range, check that the result of a shift operation is not undefined. Note that what exactly is considered undefined differs slightly between C and C++, as well as between ISO C90 and C99, etc.

-fsanitize=integer-divide-by-zero
Detect integer division by zero as well as "INT_MIN / -1" division.

-fsanitize=unreachable
With this option, the compiler turns the "__builtin_unreachable" call into a diagnostic message call instead. When reaching the "__builtin_unreachable" call, the behavior is undefined.

-fsanitize=vla-bound
This option instructs the compiler to check that the size of a variable length array is positive.

-fsanitize=null
This option enables pointer checking. In particular, an application built with this option turned on issues an error message when it tries to dereference a NULL pointer, or if a reference (possibly an rvalue reference) is bound to a NULL pointer, or if a method is invoked on an object pointed to by a NULL pointer.

-fsanitize=return
This option enables return statement checking. Programs built with this option turned on will issue an error message when the end of a non-void function is reached without actually returning a value. This option works in C++ only.

-fsanitize=signed-integer-overflow
This option enables signed integer overflow checking. We check that the result of "+", "*", and both unary and binary "-" does not overflow in signed arithmetic. Note that integer promotion rules must be taken into account. That is, the following is not an overflow:
signed char a = SCHAR_MAX;
a++;

-fsanitize=bounds
This option enables instrumentation of array bounds. Various out-of-bounds accesses are detected. Flexible array members, flexible array member-like arrays, and initializers of variables with static storage are not instrumented.

-fsanitize=bounds-strict
This option enables strict instrumentation of array bounds. Most out-of-bounds accesses are detected, including flexible array members and flexible array member-like arrays. Initializers of variables with static storage are not instrumented.
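As a hedged illustration of the signed-integer-overflow and bounds suboptions just described, the following minimal sketch triggers both checks at run time; the file name and build line are illustrative only:

    /* Two run-time reports from UndefinedBehaviorSanitizer.  One possible
       build line (illustrative):
           gcc -g -fsanitize=undefined ubsan.c -o ubsan && ./ubsan          */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int a[4] = { 0, 1, 2, 3 };
        volatile int i = 4;          /* 'volatile' keeps the bugs visible at run time */
        volatile int big = INT_MAX;

        printf("%d\n", a[i]);        /* -fsanitize=bounds: index 4 is out of bounds   */
        printf("%d\n", big + 1);     /* -fsanitize=signed-integer-overflow            */
        return 0;
    }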
-fsanitize=alignment This option enables checking of alignment of pointers when they are dereferenced, or when a reference is bound to insufficiently aligned target, or when a method or constructor is invoked on insufficiently aligned object. -fsanitize=object-size This option enables instrumentation of memory references using the "__builtin_object_size" function. Various out of bounds pointer accesses are detected. -fsanitize=float-divide-by-zero Detect floating-point division by zero. Unlike other similar options, -fsanitize=float-divide-by-zero is not enabled by -fsanitize=undefined, since floating-point division by zero can be a legitimate way of obtaining infinities and NaNs. -fsanitize=float-cast-overflow This option enables floating-point type to integer conversion checking. We check that the result of the conversion does not overflow. Unlike other similar options, -fsanitize=float-cast-overflow is not enabled by -fsanitize=undefined. This option does not work well with "FE_INVALID" exceptions enabled. -fsanitize=nonnull-attribute This option enables instrumentation of calls, checking whether null values are not passed to arguments marked as requiring a non-null value by the "nonnull" function attribute. -fsanitize=returns-nonnull-attribute This option enables instrumentation of return statements in functions marked with "returns_nonnull" function attribute, to detect returning of null values from such functions. -fsanitize=bool This option enables instrumentation of loads from bool. If a value other than 0/1 is loaded, a run-time error is issued. -fsanitize=enum This option enables instrumentation of loads from an enum type. If a value outside the range of values for the enum type is loaded, a run-time error is issued. -fsanitize=vptr This option enables instrumentation of C++ member function calls, member accesses and some conversions between pointers to base and derived classes, to verify the referenced object has the correct dynamic type. -fsanitize=pointer-overflow This option enables instrumentation of pointer arithmetics. If the pointer arithmetics overflows, a run-time error is issued. -fsanitize=builtin This option enables instrumentation of arguments to selected builtin functions. If an invalid value is passed to such arguments, a run-time error is issued. E.g. passing 0 as the argument to "__builtin_ctz" or "__builtin_clz" invokes undefined behavior and is diagnosed by this option. While -ftrapv causes traps for signed overflows to be emitted, -fsanitize=undefined gives a diagnostic message. This currently works only for the C family of languages. -fno-sanitize=all This option disables all previously enabled sanitizers. -fsanitize=all is not allowed, as some sanitizers cannot be used together. -fasan-shadow-offset=number This option forces GCC to use custom shadow offset in AddressSanitizer checks. It is useful for experimenting with different shadow memory layouts in Kernel AddressSanitizer. -fsanitize-sections=s1,s2,... Sanitize global variables in selected user-defined sections. si may contain wildcards. -fsanitize-recover[=opts] -fsanitize-recover= controls error recovery mode for sanitizers mentioned in comma-separated list of opts. Enabling this option for a sanitizer component causes it to attempt to continue running the program as if no error happened. This means multiple runtime errors can be reported in a single program run, and the exit code of the program may indicate success even when errors have been reported. 
The -fno-sanitize-recover= option can be used to alter this behavior: only the first detected error is reported and the program then exits with a non-zero exit code.

Currently this feature only works for -fsanitize=undefined (and its suboptions except for -fsanitize=unreachable and -fsanitize=return), -fsanitize=float-cast-overflow, -fsanitize=float-divide-by-zero, -fsanitize=bounds-strict, -fsanitize=kernel-address and -fsanitize=address. For these sanitizers error recovery is turned on by default, except for -fsanitize=address, for which this feature is experimental. -fsanitize-recover=all and -fno-sanitize-recover=all are also accepted: the former enables recovery for all sanitizers that support it, while the latter disables recovery for all sanitizers that support it.

Even if a recovery mode is turned on by the compiler, it also needs to be enabled on the runtime library side, otherwise the failures are still fatal. The runtime library defaults to "halt_on_error=0" for ThreadSanitizer and UndefinedBehaviorSanitizer, while the default value for AddressSanitizer is "halt_on_error=1". This can be overridden by setting the "halt_on_error" flag in the corresponding environment variable.

Syntax without an explicit opts parameter is deprecated. It is equivalent to specifying an opts list of:
undefined,float-cast-overflow,float-divide-by-zero,bounds-strict

-fsanitize-address-use-after-scope
Enable sanitization of local variables to detect use-after-scope bugs. The option sets -fstack-reuse to none.

-fsanitize-undefined-trap-on-error
The -fsanitize-undefined-trap-on-error option instructs the compiler to report undefined behavior using "__builtin_trap" rather than a "libubsan" library routine. The advantage of this is that the "libubsan" library is not needed and is not linked in, so this is usable even in freestanding environments.

-fsanitize-coverage=trace-pc
Enable coverage-guided fuzzing code instrumentation. Inserts a call to "__sanitizer_cov_trace_pc" into every basic block.

-fsanitize-coverage=trace-cmp
Enable dataflow-guided fuzzing code instrumentation. Inserts a call to "__sanitizer_cov_trace_cmp1", "__sanitizer_cov_trace_cmp2", "__sanitizer_cov_trace_cmp4" or "__sanitizer_cov_trace_cmp8" for integral comparison with both operands variable, or "__sanitizer_cov_trace_const_cmp1", "__sanitizer_cov_trace_const_cmp2", "__sanitizer_cov_trace_const_cmp4" or "__sanitizer_cov_trace_const_cmp8" for integral comparison with one operand constant, "__sanitizer_cov_trace_cmpf" or "__sanitizer_cov_trace_cmpd" for float or double comparisons, and "__sanitizer_cov_trace_switch" for switch statements.

-fcf-protection=[full|branch|return|none]
Enable code instrumentation of control-flow transfers to increase program security by checking that target addresses of control-flow transfer instructions (such as indirect function call, function return, indirect jump) are valid. This prevents diverting the flow of control to an unexpected target. This is intended to protect against such threats as Return-oriented Programming (ROP), and similarly call/jmp-oriented programming (COP/JOP).

The value "branch" tells the compiler to implement checking of validity of control-flow transfer at the point of indirect branch instructions, i.e. call/jmp instructions. The value "return" implements checking of validity at the point of returning from a function. The value "full" is an alias for specifying both "branch" and "return". The value "none" turns off instrumentation.

The macro "__CET__" is defined when -fcf-protection is used.
The first bit of "__CET__" is set to 1 for the value "branch" and the second bit of "__CET__" is set to 1 for the "return". You can also use the "nocf_check" attribute to identify which functions and calls should be skipped from instrumentation. Currently the x86 GNU/Linux target provides an implementation based on Intel Control-flow Enforcement Technology (CET) which works for i686 processor or newer. -fstack-protector Emit extra code to check for buffer overflows, such as stack smashing attacks. This is done by adding a guard variable to functions with vulnerable objects. This includes functions that call "alloca", and functions with buffers larger than 8 bytes. The guards are initialized when a function is entered and then checked when the function exits. If a guard check fails, an error message is printed and the program exits. -fstack-protector-all Like -fstack-protector except that all functions are protected. -fstack-protector-strong Like -fstack-protector but includes additional functions to be protected --- those that have local array definitions, or have references to local frame addresses. -fstack-protector-explicit Like -fstack-protector but only protects those functions which have the "stack_protect" attribute. -fstack-check Generate code to verify that you do not go beyond the boundary of the stack. You should specify this flag if you are running in an environment with multiple threads, but you only rarely need to specify it in a single-threaded environment since stack overflow is automatically detected on nearly all systems if there is only one stack. Note that this switch does not actually cause checking to be done; the operating system or the language runtime must do that. The switch causes generation of code to ensure that they see the stack being extended. You can additionally specify a string parameter: no means no checking, generic means force the use of old-style checking, specific means use the best checking method and is equivalent to bare -fstack-check. Old-style checking is a generic mechanism that requires no specific target support in the compiler but comes with the following drawbacks: 1. Modified allocation strategy for large objects: they are always allocated dynamically if their size exceeds a fixed threshold. Note this may change the semantics of some code. 2. Fixed limit on the size of the static frame of functions: when it is topped by a particular function, stack checking is not reliable and a warning is issued by the compiler. 3. Inefficiency: because of both the modified allocation strategy and the generic implementation, code performance is hampered. Note that old-style stack checking is also the fallback method for specific if no target support has been added in the compiler. -fstack-check= is designed for Ada's needs to detect infinite recursion and stack overflows. specific is an excellent choice when compiling Ada code. It is not generally sufficient to protect against stack-clash attacks. To protect against those you want -fstack-clash-protection. -fstack-clash-protection Generate code to prevent stack clash style attacks. When this option is enabled, the compiler will only allocate one page of stack space at a time and each page is accessed immediately after allocation. Thus, it prevents allocations from jumping over any stack guard page provided by the operating system. Most targets do not fully support stack clash protection. However, on those targets -fstack-clash-protection will protect dynamic stack allocations. 
-fstack-clash-protection may also provide limited protection for static stack allocations if the target supports -fstack-check=specific. -fstack-limit-register=reg -fstack-limit-symbol=sym -fno-stack-limit Generate code to ensure that the stack does not grow beyond a certain value, either the value of a register or the address of a symbol. If a larger stack is required, a signal is raised at run time. For most targets, the signal is raised before the stack overruns the boundary, so it is possible to catch the signal without taking special precautions. For instance, if the stack starts at absolute address 0x80000000 and grows downwards, you can use the flags -fstack-limit-symbol=__stack_limit and -Wl,--defsym,__stack_limit=0x7ffe0000 to enforce a stack limit of 128KB. Note that this may only work with the GNU linker. You can locally override stack limit checking by using the "no_stack_limit" function attribute. -fsplit-stack Generate code to automatically split the stack before it overflows. The resulting program has a discontiguous stack which can only overflow if the program is unable to allocate any more memory. This is most useful when running threaded programs, as it is no longer necessary to calculate a good stack size to use for each thread. This is currently only implemented for the x86 targets running GNU/Linux. When code compiled with -fsplit-stack calls code compiled without -fsplit-stack, there may not be much stack space available for the latter code to run. If compiling all code, including library code, with -fsplit-stack is not an option, then the linker can fix up these calls so that the code compiled without -fsplit-stack always has a large stack. Support for this is implemented in the gold linker in GNU binutils release 2.21 and later. -fvtable-verify=[std|preinit|none] This option is only available when compiling C++ code. It turns on (or off, if using -fvtable-verify=none) the security feature that verifies at run time, for every virtual call, that the vtable pointer through which the call is made is valid for the type of the object, and has not been corrupted or overwritten. If an invalid vtable pointer is detected at run time, an error is reported and execution of the program is immediately halted. This option causes run-time data structures to be built at program startup, which are used for verifying the vtable pointers. The options std and preinit control the timing of when these data structures are built. In both cases the data structures are built before execution reaches "main". Using -fvtable-verify=std causes the data structures to be built after shared libraries have been loaded and initialized. -fvtable-verify=preinit causes them to be built before shared libraries have been loaded and initialized. If this option appears multiple times in the command line with different values specified, none takes highest priority over both std and preinit; preinit takes priority over std. -fvtv-debug When used in conjunction with -fvtable-verify=std or -fvtable-verify=preinit, causes debug versions of the runtime functions for the vtable verification feature to be called. This flag also causes the compiler to log information about which vtable pointers it finds for each class. This information is written to a file named vtv_set_ptr_data.log in the directory named by the environment variable VTV_LOGS_DIR if that is defined or the current working directory otherwise. Note: This feature appends data to the log file. 
If you want a fresh log file, be sure to delete any existing one.

-fvtv-counts
This is a debugging flag. When used in conjunction with -fvtable-verify=std or -fvtable-verify=preinit, this causes the compiler to keep track of the total number of virtual calls it encounters and the number of verifications it inserts. It also counts the number of calls to certain run-time library functions that it inserts and logs this information for each compilation unit. The compiler writes this information to a file named vtv_count_data.log in the directory named by the environment variable VTV_LOGS_DIR if that is defined or the current working directory otherwise. It also counts the size of the vtable pointer sets for each class, and writes this information to vtv_class_set_sizes.log in the same directory.

Note: This feature appends data to the log files. To get fresh log files, be sure to delete any existing ones.

-finstrument-functions
Generate instrumentation calls for entry and exit to functions. Just after function entry and just before function exit, the following profiling functions are called with the address of the current function and its call site. (On some platforms, "__builtin_return_address" does not work beyond the current function, so the call site information may not be available to the profiling functions otherwise.)

void __cyg_profile_func_enter (void *this_fn, void *call_site);
void __cyg_profile_func_exit (void *this_fn, void *call_site);

The first argument is the address of the start of the current function, which may be looked up exactly in the symbol table.

This instrumentation is also done for functions expanded inline in other functions. The profiling calls indicate where, conceptually, the inline function is entered and exited. This means that addressable versions of such functions must be available. If all your uses of a function are expanded inline, this may mean an additional expansion of code size. If you use "extern inline" in your C code, an addressable version of such functions must be provided. (This is normally the case anyway, but if you get lucky and the optimizer always expands the functions inline, you might have gotten away without providing static copies.)

A function may be given the attribute "no_instrument_function", in which case this instrumentation is not done. This can be used, for example, for the profiling functions listed above, high-priority interrupt routines, and any functions from which the profiling functions cannot safely be called (perhaps signal handlers, if the profiling routines generate output or allocate memory). A sketch of such user-supplied hooks follows.
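The following minimal sketch is illustrative only: the file names in the comment are assumptions, and the hooks carry the "no_instrument_function" attribute so they are not themselves traced:

    /* trace.c: user-supplied hooks for -finstrument-functions.  A possible
       (illustrative) build, instrumenting only the application code:
           gcc -finstrument-functions -c app.c
           gcc -c trace.c
           gcc app.o trace.o -o app                                        */
    #include <stdio.h>

    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *this_fn, void *call_site)
    {
        fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *this_fn, void *call_site)
    {
        fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
    }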
-finstrument-functions-exclude-file-list=file,file,...
Set the list of functions that are excluded from instrumentation (see the description of -finstrument-functions). If the file that contains a function definition matches with one of file, then that function is not instrumented. The match is done on substrings: if the file parameter is a substring of the file name, it is considered to be a match. For example:
-finstrument-functions-exclude-file-list=/bits/stl,include/sys
excludes any inline function defined in files whose pathnames contain /bits/stl or include/sys.

If, for some reason, you want to include the letter ',' (comma) in one of the file arguments, write '\,'. For example, -finstrument-functions-exclude-file-list='\,\,tmp' (note the single quote surrounding the option).

-finstrument-functions-exclude-function-list=sym,sym,...
This is similar to -finstrument-functions-exclude-file-list, but this option sets the list of function names to be excluded from instrumentation. The function name to be matched is its user-visible name, such as "vector<int> blah(const vector<int> &)", not the internal mangled name (e.g., "_Z4blahRSt6vectorIiSaIiEE"). The match is done on substrings: if the sym parameter is a substring of the function name, it is considered to be a match. For C99 and C++ extended identifiers, the function name must be given in UTF-8, not using universal character names.

-fpatchable-function-entry=N[,M]
Generate N NOPs right at the beginning of each function, with the function entry point before the Mth NOP. If M is omitted, it defaults to 0 so the function entry points to the address just at the first NOP. The NOP instructions reserve extra space which can be used to patch in any desired instrumentation at run time, provided that the code segment is writable. The amount of space is controllable indirectly via the number of NOPs; the NOP instruction used corresponds to the instruction emitted by the internal GCC back-end interface "gen_nop". This behavior is target-specific and may also depend on the architecture variant and/or other compilation options.

For run-time identification, the starting addresses of these areas, which correspond to their respective function entries minus M, are additionally collected in the "__patchable_function_entries" section of the resulting binary.

Note that the value of "__attribute__ ((patchable_function_entry (N,M)))" takes precedence over the command-line option -fpatchable-function-entry=N,M. This can be used to increase the area size or to remove it completely on a single function. If "N=0", no pad location is recorded.

The NOP instructions are inserted at---and maybe before, depending on M---the function entry address, even before the prologue.

Options Controlling the Preprocessor

These options control the C preprocessor, which is run on each C source file before actual compilation. If you use the -E option, nothing is done except preprocessing. Some of these options make sense only together with -E because they cause the preprocessor output to be unsuitable for actual compilation.

In addition to the options listed here, there are a number of options to control search paths for include files documented in Directory Options. Options to control preprocessor diagnostics are listed in Warning Options.

-D name
Predefine name as a macro, with definition 1.

-D name=definition
The contents of definition are tokenized and processed as if they appeared during translation phase three in a #define directive. In particular, the definition is truncated by embedded newline characters.

If you are invoking the preprocessor from a shell or shell-like program you may need to use the shell's quoting syntax to protect characters such as spaces that have a meaning in the shell syntax.

If you wish to define a function-like macro on the command line, write its argument list with surrounding parentheses before the equals sign (if any). Parentheses are meaningful to most shells, so you should quote the option. With sh and csh, -D'name(args...)=definition' works.

-D and -U options are processed in the order they are given on the command line. All -imacros file and -include file options are processed after all -D and -U options.

-U name
Cancel any previous definition of name, either built in or provided with a -D option.
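As a hedged illustration of defining a function-like macro on the command line, as described under -D above, the following sketch consumes such a macro; the macro names and build line are illustrative only:

    /* config.c: consumes macros supplied with -D.  With sh, the whole
       function-like definition is quoted, e.g. (illustrative):
           gcc -D'SQUARE(x)=((x)*(x))' -DVERBOSE=1 -c config.c              */
    #include <stdio.h>

    #ifndef SQUARE
    #  define SQUARE(x) ((x) * (x))   /* fallback when not predefined with -D */
    #endif

    void report(int n)
    {
    #if defined(VERBOSE) && VERBOSE
        printf("%d squared is %d\n", n, SQUARE(n));
    #else
        (void)n;                       /* VERBOSE not set: stay quiet */
    #endif
    }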
-include file Process file as if "#include "file"" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the "#include "..."" search chain as normal. If multiple -include options are given, the files are included in the order they appear on the command line. -imacros file Exactly like -include, except that any output produced by scanning file is thrown away. Macros it defines remain defined. This allows you to acquire all the macros from a header without also processing its declarations. All files specified by -imacros are processed before all files specified by -include. -undef Do not predefine any system-specific or GCC-specific macros. The standard predefined macros remain defined. -pthread Define additional macros required for using the POSIX threads library. You should use this option consistently for both compilation and linking. This option is supported on GNU/Linux targets, most other Unix derivatives, and also on x86 Cygwin and MinGW targets. -M Instead of outputting the result of preprocessing, output a rule suitable for make describing the dependencies of the main source file. The preprocessor outputs one make rule containing the object file name for that source file, a colon, and the names of all the included files, including those coming from -include or -imacros command-line options. Unless specified explicitly (with -MT or -MQ), the object file name consists of the name of the source file with any suffix replaced with object file suffix and with any leading directory parts removed. If there are many included files then the rule is split into several lines using \-newline. The rule has no commands. This option does not suppress the preprocessor's debug output, such as -dM. To avoid mixing such debug output with the dependency rules you should explicitly specify the dependency output file with -MF, or use an environment variable like DEPENDENCIES_OUTPUT. Debug output is still sent to the regular output stream as normal. Passing -M to the driver implies -E, and suppresses warnings with an implicit -w. -MM Like -M but do not mention header files that are found in system header directories, nor header files that are included, directly or indirectly, from such a header. This implies that the choice of angle brackets or double quotes in an #include directive does not in itself determine whether that header appears in -MM dependency output. -MF file When used with -M or -MM, specifies a file to write the dependencies to. If no -MF switch is given the preprocessor sends the rules to the same place it would send preprocessed output. When used with the driver options -MD or -MMD, -MF overrides the default dependency output file. If file is -, then the dependencies are written to stdout. -MG In conjunction with an option such as -M requesting dependency generation, -MG assumes missing header files are generated files and adds them to the dependency list without raising an error. The dependency filename is taken directly from the "#include" directive without prepending any path. -MG also suppresses preprocessed output, as a missing header file renders this useless. This feature is used in automatic updating of makefiles. -MP This option instructs CPP to add a phony target for each dependency other than the main file, causing each to depend on nothing. 
These dummy rules work around errors make gives if you remove header files without updating the Makefile to match. This is typical output: test.o: test.c test.h test.h: -MT target Change the target of the rule emitted by dependency generation. By default CPP takes the name of the main input file, deletes any directory components and any file suffix such as .c, and appends the platform's usual object suffix. The result is the target. An -MT option sets the target to be exactly the string you specify. If you want multiple targets, you can specify them as a single argument to -MT, or use multiple -MT options. For example, -MT '$(objpfx)foo.o' might give $(objpfx)foo.o: foo.c -MQ target Same as -MT, but it quotes any characters which are special to Make. -MQ '$(objpfx)foo.o' gives $$(objpfx)foo.o: foo.c The default target is automatically quoted, as if it were given with -MQ. -MD -MD is equivalent to -M -MF file, except that -E is not implied. The driver determines file based on whether an -o option is given. If it is, the driver uses its argument but with a suffix of .d, otherwise it takes the name of the input file, removes any directory components and suffix, and applies a .d suffix. If -MD is used in conjunction with -E, any -o switch is understood to specify the dependency output file, but if used without -E, each -o is understood to specify a target object file. Since -E is not implied, -MD can be used to generate a dependency output file as a side effect of the compilation process. -MMD Like -MD except mention only user header files, not system header files. -fpreprocessed Indicate to the preprocessor that the input file has already been preprocessed. This suppresses things like macro expansion, trigraph conversion, escaped newline splicing, and processing of most directives. The preprocessor still recognizes and removes comments, so that you can pass a file preprocessed with -C to the compiler without problems. In this mode the integrated preprocessor is little more than a tokenizer for the front ends. -fpreprocessed is implicit if the input file has one of the extensions .i, .ii or .mi. These are the extensions that GCC uses for preprocessed files created by -save-temps. -fdirectives-only When preprocessing, handle directives, but do not expand macros. The option's behavior depends on the -E and -fpreprocessed options. With -E, preprocessing is limited to the handling of directives such as "#define", "#ifdef", and "#error". Other preprocessor operations, such as macro expansion and trigraph conversion are not performed. In addition, the -dD option is implicitly enabled. With -fpreprocessed, predefinition of command line and most builtin macros is disabled. Macros such as "__LINE__", which are contextually dependent, are handled normally. This enables compilation of files previously preprocessed with "-E -fdirectives-only". With both -E and -fpreprocessed, the rules for -fpreprocessed take precedence. This enables full preprocessing of files previously preprocessed with "-E -fdirectives-only". -fdollars-in-identifiers Accept $ in identifiers. -fextended-identifiers Accept universal character names in identifiers. This option is enabled by default for C99 (and later C standard versions) and C++. -fno-canonical-system-headers When preprocessing, do not shorten system header paths with canonicalization. -ftabstop=width Set the distance between tab stops. This helps the preprocessor report correct column numbers in warnings or errors, even if tabs appear on the line. 
If the value is less than 1 or greater than 100, the option is ignored. The default is 8.

-ftrack-macro-expansion[=level]
Track locations of tokens across macro expansions. This allows the compiler to emit a diagnostic about the current macro expansion stack when a compilation error occurs in a macro expansion. Using this option makes the preprocessor and the compiler consume more memory. The level parameter can be used to choose the level of precision of token location tracking, thus decreasing the memory consumption if necessary. Value 0 of level deactivates this option. Value 1 tracks token locations in a degraded mode for the sake of minimal memory overhead. In this mode all tokens resulting from the expansion of an argument of a function-like macro have the same location. Value 2 tracks token locations completely. This value is the most memory-hungry. When this option is given no argument, the default parameter value is 2. Note that "-ftrack-macro-expansion=2" is activated by default.

-fmacro-prefix-map=old=new
When preprocessing files residing in directory old, expand the "__FILE__" and "__BASE_FILE__" macros as if the files resided in directory new instead. This can be used to change an absolute path to a relative path by using . for new, which can result in more reproducible builds that are location independent. This option also affects "__builtin_FILE()" during compilation. See also -ffile-prefix-map.

-fexec-charset=charset
Set the execution character set, used for string and character constants. The default is UTF-8. charset can be any encoding supported by the system's "iconv" library routine.

-fwide-exec-charset=charset
Set the wide execution character set, used for wide string and character constants. The default is UTF-32 or UTF-16, whichever corresponds to the width of "wchar_t". As with -fexec-charset, charset can be any encoding supported by the system's "iconv" library routine; however, you will have problems with encodings that do not fit exactly in "wchar_t".

-finput-charset=charset
Set the input character set, used for translation from the character set of the input file to the source character set used by GCC. If the locale does not specify, or GCC cannot get this information from the locale, the default is UTF-8. This can be overridden by either the locale or this command-line option. Currently the command-line option takes precedence if there's a conflict. charset can be any encoding supported by the system's "iconv" library routine.

-fpch-deps
When using precompiled headers, this flag causes the dependency-output flags to also list the files from the precompiled header's dependencies. If not specified, only the precompiled header is listed and not the files that were used to create it, because those files are not consulted when a precompiled header is used.

-fpch-preprocess
This option allows use of a precompiled header together with -E. It inserts a special "#pragma", "#pragma GCC pch_preprocess "filename"", in the output to mark the place where the precompiled header was found, and its filename. When -fpreprocessed is in use, GCC recognizes this "#pragma" and loads the PCH.

This option is off by default, because the resulting preprocessed output is only really suitable as input to GCC. It is switched on by -save-temps.

You should not write this "#pragma" in your own code, but it is safe to edit the filename if the PCH file is available in a different location. The filename may be absolute or it may be relative to GCC's current directory.
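As a hedged illustration of the -fmacro-prefix-map remapping described above, consider this sketch; the directory names, file name and resulting string are illustrative assumptions:

    /* where.c: __FILE__ without the build directory baked in.  Assuming the
       build system compiles with an absolute path, a possible build line is:
           gcc -fmacro-prefix-map=/home/user/project=. -c /home/user/project/src/where.c
       after which __FILE__ expands to something like "./src/where.c".      */
    #include <stdio.h>

    void where(void)
    {
        /* The embedded string no longer leaks the absolute build path. */
        printf("compiled from %s, line %d\n", __FILE__, __LINE__);
    }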
-fworking-directory Enable generation of linemarkers in the preprocessor output that let the compiler know the current working directory at the time of preprocessing. When this option is enabled, the preprocessor emits, after the initial linemarker, a second linemarker with the current working directory followed by two slashes. GCC uses this directory, when it's present in the preprocessed input, as the directory emitted as the current working directory in some debugging information formats. This option is implicitly enabled if debugging information is enabled, but this can be inhibited with the negated form -fno-working-directory. If the -P flag is present in the command line, this option has no effect, since no "#line" directives are emitted whatsoever. -A predicate=answer Make an assertion with the predicate predicate and answer answer. This form is preferred to the older form -A predicate(answer), which is still supported, because it does not use shell special characters. -A -predicate=answer Cancel an assertion with the predicate predicate and answer answer. -C Do not discard comments. All comments are passed through to the output file, except for comments in processed directives, which are deleted along with the directive. You should be prepared for side effects when using -C; it causes the preprocessor to treat comments as tokens in their own right. For example, comments appearing at the start of what would be a directive line have the effect of turning that line into an ordinary source line, since the first token on the line is no longer a #. -CC Do not discard comments, including during macro expansion. This is like -C, except that comments contained within macros are also passed through to the output file where the macro is expanded. In addition to the side effects of the -C option, the -CC option causes all C++-style comments inside a macro to be converted to C-style comments. This is to prevent later use of that macro from inadvertently commenting out the remainder of the source line. The -CC option is generally used to support lint comments. -P Inhibit generation of linemarkers in the output from the preprocessor. This might be useful when running the preprocessor on something that is not C code, and will be sent to a program which might be confused by the linemarkers. -traditional -traditional-cpp Try to imitate the behavior of pre-standard C preprocessors, as opposed to ISO C preprocessors. See the GNU CPP manual for details. Note that GCC does not otherwise attempt to emulate a pre- standard C compiler, and these options are only supported with the -E switch, or when invoking CPP explicitly. -trigraphs Support ISO C trigraphs. These are three-character sequences, all starting with ??, that are defined by ISO C to stand for single characters. For example, ??/ stands for \, so '??/n' is a character constant for a newline. The nine trigraphs and their replacements are Trigraph: ??( ??) ??< ??> ??= ??/ ??' ??! ??- Replacement: [ ] { } # \ ^ | ~ By default, GCC ignores trigraphs, but in standard-conforming modes it converts them. See the -std and -ansi options. -remap Enable special code to work around file systems which only permit very short file names, such as MS-DOS. -H Print the name of each header file used, in addition to other normal activities. Each name is indented to show how deep in the #include stack it is. 
Precompiled header files are also printed, even if they are found to be invalid; an invalid precompiled header file is printed with ...x and a valid one with ...! . -dletters Says to make debugging dumps during compilation as specified by letters. The flags documented here are those relevant to the preprocessor. Other letters are interpreted by the compiler proper, or reserved for future versions of GCC, and so are silently ignored. If you specify letters whose behavior conflicts, the result is undefined. -dM Instead of the normal output, generate a list of #define directives for all the macros defined during the execution of the preprocessor, including predefined macros. This gives you a way of finding out what is predefined in your version of the preprocessor. Assuming you have no file foo.h, the command touch foo.h; cpp -dM foo.h shows all the predefined macros. If you use -dM without the -E option, -dM is interpreted as a synonym for -fdump-rtl-mach. -dD Like -dM except in two respects: it does not include the predefined macros, and it outputs both the #define directives and the result of preprocessing. Both kinds of output go to the standard output file. -dN Like -dD, but emit only the macro names, not their expansions. -dI Output #include directives in addition to the result of preprocessing. -dU Like -dD except that only macros that are expanded, or whose definedness is tested in preprocessor directives, are output; the output is delayed until the use or test of the macro; and #undef directives are also output for macros tested but undefined at the time. -fdebug-cpp This option is only useful for debugging GCC. When used from CPP or with -E, it dumps debugging information about location maps. Every token in the output is preceded by the dump of the map its location belongs to. When used from GCC without -E, this option has no effect. -Wp,option You can use -Wp,option to bypass the compiler driver and pass option directly through to the preprocessor. If option contains commas, it is split into multiple options at the commas. However, many options are modified, translated or interpreted by the compiler driver before being passed to the preprocessor, and -Wp forcibly bypasses this phase. The preprocessor's direct interface is undocumented and subject to change, so whenever possible you should avoid using -Wp and let the driver handle the options instead. -Xpreprocessor option Pass option as an option to the preprocessor. You can use this to supply system-specific preprocessor options that GCC does not recognize. If you want to pass an option that takes an argument, you must use -Xpreprocessor twice, once for the option and once for the argument. -no-integrated-cpp Perform preprocessing as a separate pass before compilation. By default, GCC performs preprocessing as an integrated part of input tokenization and parsing. If this option is provided, the appropriate language front end (cc1, cc1plus, or cc1obj for C, C++, and Objective-C, respectively) is instead invoked twice, once for preprocessing only and once for actual compilation of the preprocessed input. This option may be useful in conjunction with the -B or -wrapper options to specify an alternate preprocessor or perform additional processing of the program source between normal preprocessing and compilation. Passing Options to the Assembler You can pass options to the assembler. -Wa,option Pass option as an option to the assembler. If option contains commas, it is split into multiple options at the commas. 
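As a hedged example of the -Wa mechanism just described, the GNU assembler's listing flags are commonly passed through this way; the flag combination and file names below are illustrative and assume GNU as:

    /* demo.c: request an assembler listing that interleaves C source with the
       generated instructions.  Everything after -Wa, goes to the assembler
       unchanged; an illustrative build line:
           gcc -g -O2 -c demo.c -Wa,-adhln=demo.lst                         */
    int demo(int x)
    {
        return x * x + 1;   /* the listing shows the instructions chosen here */
    }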
-Xassembler option Pass option as an option to the assembler. You can use this to supply system-specific assembler options that GCC does not recognize. If you want to pass an option that takes an argument, you must use -Xassembler twice, once for the option and once for the argument. Options for Linking These options come into play when the compiler links object files into an executable output file. They are meaningless if the compiler is not doing a link step. object-file-name A file name that does not end in a special recognized suffix is considered to name an object file or library. (Object files are distinguished from libraries by the linker according to the file contents.) If linking is done, these object files are used as input to the linker. -c -S -E If any of these options is used, then the linker is not run, and object file names should not be used as arguments. -flinker-output=type This option controls the code generation of the link-time optimizer. By default the linker output is determined automatically by the linker plugin. It may be useful to control the type manually when debugging the compiler, or when incremental linking to a non-LTO object file is desired. If type is exec, code generation is configured to produce a static binary; in this case -fpic and -fpie are both disabled. If type is dyn, code generation is configured to produce a shared library; in this case -fpic or -fPIC is preserved, but not enabled automatically. This makes it possible to build shared libraries without position-independent code on architectures where this is possible, e.g. on x86. If type is pie, code generation is configured to produce an -fpie executable. This results in optimizations similar to exec, except that -fpie is not disabled if it was specified at compilation time. If type is rel, the compiler assumes that incremental linking is done. The sections containing intermediate code for link-time optimization are merged, pre-optimized, and output to the resulting object file. In addition, if -ffat-lto-objects is specified, binary code is produced for future non-LTO linking. The object file produced by incremental linking is smaller than a static library produced from the same object files. At link time the result of incremental linking also loads into the compiler faster than a static library, assuming that the majority of objects in the library are used. Finally, nolto-rel configures the compiler for incremental linking where code generation is forced: a final binary is produced and the intermediate code for later link-time optimization is stripped. When multiple object files are linked together, the resulting code is still optimized better than with link-time optimization disabled (for example, cross-module inlining happens), but most of the benefits of whole-program optimization are lost. During an incremental link (with -r) the linker plugin defaults to rel. With current interfaces to GNU Binutils it is, however, not possible to incrementally link LTO objects and non-LTO objects into a single mixed object file. If any of the object files in an incremental link cannot be used for link-time optimization, the linker plugin issues a warning and uses nolto-rel. To maintain whole-program optimization, it is recommended to link such objects into a static library instead. Alternatively it is possible to use H.J. Lu's binutils with support for mixed objects. -fuse-ld=bfd Use the bfd linker instead of the default linker. -fuse-ld=gold Use the gold linker instead of the default linker.
-fuse-ld=lld Use the LLVM lld linker instead of the default linker. -llibrary -l library Search the library named library when linking. (The second alternative with the library as a separate argument is only for POSIX compliance and is not recommended.) The -l option is passed directly to the linker by GCC. Refer to your linker documentation for exact details. The general description below applies to the GNU linker. The linker searches a standard list of directories for the library. The directories searched include several standard system directories plus any that you specify with -L. Static libraries are archives of object files, and have file names like liblibrary.a. Some targets also support shared libraries, which typically have names like liblibrary.so. If both static and shared libraries are found, the linker gives preference to linking with the shared library unless the -static option is used. It makes a difference where in the command you write this option; the linker searches and processes libraries and object files in the order they are specified. Thus, foo.o -lz bar.o searches library z after file foo.o but before bar.o. If bar.o refers to functions in z, those functions may not be loaded. -lobjc You need this special case of the -l option in order to link an Objective-C or Objective-C++ program. -nostartfiles Do not use the standard system startup files when linking. The standard system libraries are used normally, unless -nostdlib, -nolibc, or -nodefaultlibs is used. -nodefaultlibs Do not use the standard system libraries when linking. Only the libraries you specify are passed to the linker, and options specifying linkage of the system libraries, such as -static-libgcc or -shared-libgcc, are ignored. The standard startup files are used normally, unless -nostartfiles is used. The compiler may generate calls to "memcmp", "memset", "memcpy" and "memmove". These entries are usually resolved by entries in libc. These entry points should be supplied through some other mechanism when this option is specified. -nolibc Do not use the C library or system libraries tightly coupled with it when linking. Still link with the startup files, libgcc or toolchain provided language support libraries such as libgnat, libgfortran or libstdc++ unless options preventing their inclusion are used as well. This typically removes -lc from the link command line, as well as system libraries that normally go with it and become meaningless when absence of a C library is assumed, for example -lpthread or -lm in some configurations. This is intended for bare- board targets when there is indeed no C library available. -nostdlib Do not use the standard system startup files or libraries when linking. No startup files and only the libraries you specify are passed to the linker, and options specifying linkage of the system libraries, such as -static-libgcc or -shared-libgcc, are ignored. The compiler may generate calls to "memcmp", "memset", "memcpy" and "memmove". These entries are usually resolved by entries in libc. These entry points should be supplied through some other mechanism when this option is specified. One of the standard libraries bypassed by -nostdlib and -nodefaultlibs is libgcc.a, a library of internal subroutines which GCC uses to overcome shortcomings of particular machines, or special needs for some languages. In most cases, you need libgcc.a even when you want to avoid other standard libraries. 
In other words, when you specify -nostdlib or -nodefaultlibs you should usually specify -lgcc as well. This ensures that you have no unresolved references to internal GCC library subroutines. (An example of such an internal subroutine is "__main", used to ensure C++ constructors are called.) -e entry --entry=entry Specify that the program entry point is entry. The argument is interpreted by the linker; the GNU linker accepts either a symbol name or an address. -pie Produce a dynamically linked position independent executable on targets that support it. For predictable results, you must also specify the same set of options used for compilation (-fpie, -fPIE, or model suboptions) when you specify this linker option. -no-pie Don't produce a dynamically linked position independent executable. -static-pie Produce a static position independent executable on targets that support it. A static position independent executable is similar to a static executable, but can be loaded at any address without a dynamic linker. For predictable results, you must also specify the same set of options used for compilation (-fpie, -fPIE, or model suboptions) when you specify this linker option. -pthread Link with the POSIX threads library. This option is supported on GNU/Linux targets, most other Unix derivatives, and also on x86 Cygwin and MinGW targets. On some targets this option also sets flags for the preprocessor, so it should be used consistently for both compilation and linking. -r Produce a relocatable object as output. This is also known as partial linking. -rdynamic Pass the flag -export-dynamic to the ELF linker, on targets that support it. This instructs the linker to add all symbols, not only used ones, to the dynamic symbol table. This option is needed for some uses of "dlopen" or to allow obtaining backtraces from within a program. -s Remove all symbol table and relocation information from the executable. -static On systems that support dynamic linking, this overrides -pie and prevents linking with the shared libraries. On other systems, this option has no effect. -shared Produce a shared object which can then be linked with other objects to form an executable. Not all systems support this option. For predictable results, you must also specify the same set of options used for compilation (-fpic, -fPIC, or model suboptions) when you specify this linker option.[1] -shared-libgcc -static-libgcc On systems that provide libgcc as a shared library, these options force the use of either the shared or static version, respectively. If no shared version of libgcc was built when the compiler was configured, these options have no effect. There are several situations in which an application should use the shared libgcc instead of the static version. The most common of these is when the application wishes to throw and catch exceptions across different shared libraries. In that case, each of the libraries as well as the application itself should use the shared libgcc. Therefore, the G++ driver automatically adds -shared-libgcc whenever you build a shared library or a main executable, because C++ programs typically use exceptions, so this is the right thing to do. If, instead, you use the GCC driver to create shared libraries, you may find that they are not always linked with the shared libgcc. If GCC finds, at its configuration time, that you have a non-GNU linker or a GNU linker that does not support option --eh-frame-hdr, it links the shared version of libgcc into shared libraries by default. 
Otherwise, it takes advantage of the linker and optimizes away the linking with the shared version of libgcc, linking with the static version of libgcc by default. This allows exceptions to propagate through such shared libraries, without incurring relocation costs at library load time. However, if a library or main executable is supposed to throw or catch exceptions, you must link it using the G++ driver, or using the option -shared-libgcc, such that it is linked with the shared libgcc. -static-libasan When the -fsanitize=address option is used to link a program, the GCC driver automatically links against libasan. If libasan is available as a shared library, and the -static option is not used, then this links against the shared version of libasan. The -static-libasan option directs the GCC driver to link libasan statically, without necessarily linking other libraries statically. -static-libtsan When the -fsanitize=thread option is used to link a program, the GCC driver automatically links against libtsan. If libtsan is available as a shared library, and the -static option is not used, then this links against the shared version of libtsan. The -static-libtsan option directs the GCC driver to link libtsan statically, without necessarily linking other libraries statically. -static-liblsan When the -fsanitize=leak option is used to link a program, the GCC driver automatically links against liblsan. If liblsan is available as a shared library, and the -static option is not used, then this links against the shared version of liblsan. The -static-liblsan option directs the GCC driver to link liblsan statically, without necessarily linking other libraries statically. -static-libubsan When the -fsanitize=undefined option is used to link a program, the GCC driver automatically links against libubsan. If libubsan is available as a shared library, and the -static option is not used, then this links against the shared version of libubsan. The -static-libubsan option directs the GCC driver to link libubsan statically, without necessarily linking other libraries statically. -static-libstdc++ When the g++ program is used to link a C++ program, it normally automatically links against libstdc++. If libstdc++ is available as a shared library, and the -static option is not used, then this links against the shared version of libstdc++. That is normally fine. However, it is sometimes useful to freeze the version of libstdc++ used by the program without going all the way to a fully static link. The -static-libstdc++ option directs the g++ driver to link libstdc++ statically, without necessarily linking other libraries statically. -symbolic Bind references to global symbols when building a shared object. Warn about any unresolved references (unless overridden by the link editor option -Xlinker -z -Xlinker defs). Only a few systems support this option. -T script Use script as the linker script. This option is supported by most systems using the GNU linker. On some targets, such as bare-board targets without an operating system, the -T option may be required when linking to avoid references to undefined symbols. -Xlinker option Pass option as an option to the linker. You can use this to supply system-specific linker options that GCC does not recognize. If you want to pass an option that takes a separate argument, you must use -Xlinker twice, once for the option and once for the argument. For example, to pass -assert definitions, you must write -Xlinker -assert -Xlinker definitions. 
It does not work to write -Xlinker "-assert definitions", because this passes the entire string as a single argument, which is not what the linker expects. When using the GNU linker, it is usually more convenient to pass arguments to linker options using the option=value syntax than as separate arguments. For example, you can specify -Xlinker -Map=output.map rather than -Xlinker -Map -Xlinker output.map. Other linkers may not support this syntax for command-line options. -Wl,option Pass option as an option to the linker. If option contains commas, it is split into multiple options at the commas. You can use this syntax to pass an argument to the option. For example, -Wl,-Map,output.map passes -Map output.map to the linker. When using the GNU linker, you can also get the same effect with -Wl,-Map=output.map. -u symbol Pretend the symbol symbol is undefined, to force linking of library modules to define it. You can use -u multiple times with different symbols to force loading of additional library modules. -z keyword -z is passed directly on to the linker along with the keyword keyword. See the section in the documentation of your linker for permitted values and their meanings. Options for Directory Search These options specify directories to search for header files, for libraries and for parts of the compiler: -I dir -iquote dir -isystem dir -idirafter dir Add the directory dir to the list of directories to be searched for header files during preprocessing. If dir begins with = or $SYSROOT, then the = or $SYSROOT is replaced by the sysroot prefix; see --sysroot and -isysroot. Directories specified with -iquote apply only to the quote form of the directive, "#include "file"". Directories specified with -I, -isystem, or -idirafter apply to lookup for both the "#include "file"" and "#include <file>" directives. You can specify any number or combination of these options on the command line to search for header files in several directories. The lookup order is as follows: 1. For the quote form of the include directive, the directory of the current file is searched first. 2. For the quote form of the include directive, the directories specified by -iquote options are searched in left-to-right order, as they appear on the command line. 3. Directories specified with -I options are scanned in left-to-right order. 4. Directories specified with -isystem options are scanned in left-to-right order. 5. Standard system directories are scanned. 6. Directories specified with -idirafter options are scanned in left-to-right order. You can use -I to override a system header file, substituting your own version, since these directories are searched before the standard system header file directories. However, you should not use this option to add directories that contain vendor-supplied system header files; use -isystem for that. The -isystem and -idirafter options also mark the directory as a system directory, so that it gets the same special treatment that is applied to the standard system directories. If a standard system include directory, or a directory specified with -isystem, is also specified with -I, the -I option is ignored. The directory is still searched but as a system directory at its normal position in the system include chain. This is to ensure that GCC's procedure to fix buggy system headers and the ordering for the "#include_next" directive are not inadvertently changed. If you really need to change the search order for system directories, use the -nostdinc and/or -isystem options. 
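           As an illustration of the lookup order described above, a project that keeps its own headers separate from bundled third-party headers might use a command line such as the following (all directory names are illustrative):

               gcc -iquote include -I build/generated -isystem third_party/include -c src/main.c

           Here "#include "config.h"" is searched for first in the directory of main.c and then in include/, while "#include <vendor.h>" is searched for in build/generated and then in third_party/include before the standard system directories. Because third_party/include is given with -isystem, headers found there receive the same special treatment as the standard system headers.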
-I- Split the include path. This option has been deprecated. Please use -iquote instead for -I directories before the -I- and remove the -I- option. Any directories specified with -I options before -I- are searched only for headers requested with "#include "file""; they are not searched for "#include <file>". If additional directories are specified with -I options after the -I-, those directories are searched for all #include directives. In addition, -I- inhibits the use of the directory of the current file directory as the first search directory for "#include "file"". There is no way to override this effect of -I-. -iprefix prefix Specify prefix as the prefix for subsequent -iwithprefix options. If the prefix represents a directory, you should include the final /. -iwithprefix dir -iwithprefixbefore dir Append dir to the prefix specified previously with -iprefix, and add the resulting directory to the include search path. -iwithprefixbefore puts it in the same place -I would; -iwithprefix puts it where -idirafter would. -isysroot dir This option is like the --sysroot option, but applies only to header files (except for Darwin targets, where it applies to both header files and libraries). See the --sysroot option for more information. -imultilib dir Use dir as a subdirectory of the directory containing target- specific C++ headers. -nostdinc Do not search the standard system directories for header files. Only the directories explicitly specified with -I, -iquote, -isystem, and/or -idirafter options (and the directory of the current file, if appropriate) are searched. -nostdinc++ Do not search for header files in the C++-specific standard directories, but do still search the other standard directories. (This option is used when building the C++ library.) -iplugindir=dir Set the directory to search for plugins that are passed by -fplugin=name instead of -fplugin=path/name.so. This option is not meant to be used by the user, but only passed by the driver. -Ldir Add directory dir to the list of directories to be searched for -l. -Bprefix This option specifies where to find the executables, libraries, include files, and data files of the compiler itself. The compiler driver program runs one or more of the subprograms cpp, cc1, as and ld. It tries prefix as a prefix for each program it tries to run, both with and without machine/version/ for the corresponding target machine and compiler version. For each subprogram to be run, the compiler driver first tries the -B prefix, if any. If that name is not found, or if -B is not specified, the driver tries two standard prefixes, /usr/lib/gcc/ and /usr/local/lib/gcc/. If neither of those results in a file name that is found, the unmodified program name is searched for using the directories specified in your PATH environment variable. The compiler checks to see if the path provided by -B refers to a directory, and if necessary it adds a directory separator character at the end of the path. -B prefixes that effectively specify directory names also apply to libraries in the linker, because the compiler translates these options into -L options for the linker. They also apply to include files in the preprocessor, because the compiler translates these options into -isystem options for the preprocessor. In this case, the compiler appends include to the prefix. The runtime support file libgcc.a can also be searched for using the -B prefix, if needed. If it is not found there, the two standard prefixes above are tried, and that is all. 
The file is left out of the link if it is not found by those means. Another way to specify a prefix much like the -B prefix is to use the environment variable GCC_EXEC_PREFIX. As a special kludge, if the path provided by -B is [dir/]stageN/, where N is a number in the range 0 to 9, then it is replaced by [dir/]include. This is to help with bootstrapping the compiler. -no-canonical-prefixes Do not expand any symbolic links, resolve references to /../ or /./, or make the path absolute when generating a relative prefix. --sysroot=dir Use dir as the logical root directory for headers and libraries. For example, if the compiler normally searches for headers in /usr/include and libraries in /usr/lib, it instead searches dir/usr/include and dir/usr/lib. If you use both this option and the -isysroot option, then the --sysroot option applies to libraries, but the -isysroot option applies to header files. The GNU linker (beginning with version 2.16) has the necessary support for this option. If your linker does not support this option, the header file aspect of --sysroot still works, but the library aspect does not. --no-sysroot-suffix For some targets, a suffix is added to the root directory specified with --sysroot, depending on the other options used, so that headers may for example be found in dir/suffix/usr/include instead of dir/usr/include. This option disables the addition of such a suffix. Options for Code Generation Conventions These machine-independent options control the interface conventions used in code generation. Most of them have both positive and negative forms; the negative form of -ffoo is -fno-foo. In the table below, only one of the forms is listed---the one that is not the default. You can figure out the other form by either removing no- or adding it. -fstack-reuse=reuse-level This option controls stack space reuse for user-declared local/auto variables and compiler-generated temporaries. reuse-level can be all, named_vars, or none. all enables stack reuse for all local variables and temporaries, named_vars enables the reuse only for user-defined local variables with names, and none disables stack reuse completely. The default value is all. The option is needed when the program extends the lifetime of a scoped local variable or a compiler-generated temporary beyond the end point defined by the language. When the lifetime of a variable ends, and if the variable lives in memory, the optimizing compiler has the freedom to reuse its stack space with other temporaries or scoped local variables whose live range does not overlap with it. Legacy code extending local lifetime is likely to break with the stack reuse optimization. For example,

               int *p;
               {
                 int local1;

                 p = &local1;
                 local1 = 10;
                 ....
               }
               {
                 int local2;
                 local2 = 20;
                 ...
               }

               if (*p == 10)  // out-of-scope use of local1
                 {
                 }

           Another example:

               struct A
               {
                   A(int k) : i(k), j(k) { }
                   int i;
                   int j;
               };

               A *ap;

               void foo(const A& ar)
               {
                  ap = &ar;
               }

               void bar()
               {
                  foo(A(10));
                  // temp object's lifetime ends when foo returns

                  {
                    A a(20);
                    ....
                  }
                  ap->i += 10;  // ap references an out-of-scope temp whose space
                                // is reused with a.  What is the value of ap->i?
               }

           The lifetime of a compiler-generated temporary is well defined by the C++ standard. When the lifetime of a temporary ends, and if the temporary lives in memory, the optimizing compiler has the freedom to reuse its stack space with other temporaries or scoped local variables whose live range does not overlap with it.
However some of the legacy code relies on the behavior of older compilers in which temporaries' stack space is not reused, the aggressive stack reuse can lead to runtime errors. This option is used to control the temporary stack reuse optimization. -ftrapv This option generates traps for signed overflow on addition, subtraction, multiplication operations. The options -ftrapv and -fwrapv override each other, so using -ftrapv -fwrapv on the command-line results in -fwrapv being effective. Note that only active options override, so using -ftrapv -fwrapv -fno-wrapv on the command-line results in -ftrapv being effective. -fwrapv This option instructs the compiler to assume that signed arithmetic overflow of addition, subtraction and multiplication wraps around using twos-complement representation. This flag enables some optimizations and disables others. The options -ftrapv and -fwrapv override each other, so using -ftrapv -fwrapv on the command-line results in -fwrapv being effective. Note that only active options override, so using -ftrapv -fwrapv -fno-wrapv on the command-line results in -ftrapv being effective. -fwrapv-pointer This option instructs the compiler to assume that pointer arithmetic overflow on addition and subtraction wraps around using twos-complement representation. This flag disables some optimizations which assume pointer overflow is invalid. -fstrict-overflow This option implies -fno-wrapv -fno-wrapv-pointer and when negated implies -fwrapv -fwrapv-pointer. -fexceptions Enable exception handling. Generates extra code needed to propagate exceptions. For some targets, this implies GCC generates frame unwind information for all functions, which can produce significant data size overhead, although it does not affect execution. If you do not specify this option, GCC enables it by default for languages like C++ that normally require exception handling, and disables it for languages like C that do not normally require it. However, you may need to enable this option when compiling C code that needs to interoperate properly with exception handlers written in C++. You may also wish to disable this option if you are compiling older C++ programs that don't use exception handling. -fnon-call-exceptions Generate code that allows trapping instructions to throw exceptions. Note that this requires platform-specific runtime support that does not exist everywhere. Moreover, it only allows trapping instructions to throw exceptions, i.e. memory references or floating-point instructions. It does not allow exceptions to be thrown from arbitrary signal handlers such as "SIGALRM". -fdelete-dead-exceptions Consider that instructions that may throw exceptions but don't otherwise contribute to the execution of the program can be optimized away. This option is enabled by default for the Ada front end, as permitted by the Ada language specification. Optimization passes that cause dead exceptions to be removed are enabled independently at different optimization levels. -funwind-tables Similar to -fexceptions, except that it just generates any needed static data, but does not affect the generated code in any other way. You normally do not need to enable this option; instead, a language processor that needs this handling enables it on your behalf. -fasynchronous-unwind-tables Generate unwind table in DWARF format, if supported by target machine. The table is exact at each instruction boundary, so it can be used for stack unwinding from asynchronous events (such as debugger or garbage collector). 
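           As an illustration of the signed-overflow options -ftrapv and -fwrapv described earlier in this group, consider the following deliberately overflowing test (a minimal sketch; the file name is illustrative):

               /* wrap.c */
               #include <limits.h>
               #include <stdio.h>

               int grows (int x)
               {
                 /* With -fwrapv, INT_MAX + 1 wraps to INT_MIN and the
                    comparison yields 0.  Without -fwrapv the signed
                    overflow is undefined and the test may be folded to 1.
                    With -ftrapv the addition traps at run time instead.  */
                 return x + 1 > x;
               }

               int main (void)
               {
                 printf ("%d\n", grows (INT_MAX));
                 return 0;
               }

           Compiling with "gcc -O2 -fwrapv wrap.c" therefore prints 0, while compiling without -fwrapv may print 1.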
-fno-gnu-unique On systems with recent GNU assembler and C library, the C++ compiler uses the "STB_GNU_UNIQUE" binding to make sure that definitions of template static data members and static local variables in inline functions are unique even in the presence of "RTLD_LOCAL"; this is necessary to avoid problems with a library used by two different "RTLD_LOCAL" plugins depending on a definition in one of them and therefore disagreeing with the other one about the binding of the symbol. But this causes "dlclose" to be ignored for affected DSOs; if your program relies on reinitialization of a DSO via "dlclose" and "dlopen", you can use -fno-gnu-unique. -fpcc-struct-return Return "short" "struct" and "union" values in memory like longer ones, rather than in registers. This convention is less efficient, but it has the advantage of allowing intercallability between GCC-compiled files and files compiled with other compilers, particularly the Portable C Compiler (pcc). The precise convention for returning structures in memory depends on the target configuration macros. Short structures and unions are those whose size and alignment match that of some integer type. Warning: code compiled with the -fpcc-struct-return switch is not binary compatible with code compiled with the -freg-struct-return switch. Use it to conform to a non- default application binary interface. -freg-struct-return Return "struct" and "union" values in registers when possible. This is more efficient for small structures than -fpcc-struct-return. If you specify neither -fpcc-struct-return nor -freg-struct-return, GCC defaults to whichever convention is standard for the target. If there is no standard convention, GCC defaults to -fpcc-struct-return, except on targets where GCC is the principal compiler. In those cases, we can choose the standard, and we chose the more efficient register return alternative. Warning: code compiled with the -freg-struct-return switch is not binary compatible with code compiled with the -fpcc-struct-return switch. Use it to conform to a non- default application binary interface. -fshort-enums Allocate to an "enum" type only as many bytes as it needs for the declared range of possible values. Specifically, the "enum" type is equivalent to the smallest integer type that has enough room. Warning: the -fshort-enums switch causes GCC to generate code that is not binary compatible with code generated without that switch. Use it to conform to a non-default application binary interface. -fshort-wchar Override the underlying type for "wchar_t" to be "short unsigned int" instead of the default for the target. This option is useful for building programs to run under WINE. Warning: the -fshort-wchar switch causes GCC to generate code that is not binary compatible with code generated without that switch. Use it to conform to a non-default application binary interface. -fno-common In C code, this option controls the placement of global variables defined without an initializer, known as tentative definitions in the C standard. Tentative definitions are distinct from declarations of a variable with the "extern" keyword, which do not allocate storage. Unix C compilers have traditionally allocated storage for uninitialized global variables in a common block. This allows the linker to resolve all tentative definitions of the same variable in different compilation units to the same object, or to a non-tentative definition. This is the behavior specified by -fcommon, and is the default for GCC on most targets. 
On the other hand, this behavior is not required by ISO C, and on some targets may carry a speed or code size penalty on variable references. The -fno-common option specifies that the compiler should instead place uninitialized global variables in the BSS section of the object file. This inhibits the merging of tentative definitions by the linker so you get a multiple-definition error if the same variable is defined in more than one compilation unit. Compiling with -fno-common is useful on targets for which it provides better performance, or if you wish to verify that the program will work on other systems that always treat uninitialized variable definitions this way. -fno-ident Ignore the "#ident" directive. -finhibit-size-directive Don't output a ".size" assembler directive, or anything else that would cause trouble if the function is split in the middle, and the two halves are placed at locations far apart in memory. This option is used when compiling crtstuff.c; you should not need to use it for anything else. -fverbose-asm Put extra commentary information in the generated assembly code to make it more readable. This option is generally only of use to those who actually need to read the generated assembly code (perhaps while debugging the compiler itself). -fno-verbose-asm, the default, causes the extra information to be omitted and is useful when comparing two assembler files. The added comments include:

           * information on the compiler version and command-line options,

           * the source code lines associated with the assembly instructions, in the form FILENAME:LINENUMBER:CONTENT OF LINE,

           * hints on which high-level expressions correspond to the various assembly instruction operands.

           For example, given this C source file:

               int test (int n)
               {
                 int i;
                 int total = 0;

                 for (i = 0; i < n; i++)
                   total += i * i;

                 return total;
               }

           compiling to (x86_64) assembly via -S and emitting the result directly to stdout via -o -

               gcc -S test.c -fverbose-asm -Os -o -

           gives output similar to this:

               .file "test.c"
               # GNU C11 (GCC) version 7.0.0 20160809 (experimental) (x86_64-pc-linux-gnu)
                 [...snip...]
               # options passed:
                 [...snip...]
                       .text
                       .globl  test
                       .type   test, @function
               test:
               .LFB0:
                       .cfi_startproc
               # test.c:4:   int total = 0;
                       xorl    %eax, %eax      # <retval>
               # test.c:6:   for (i = 0; i < n; i++)
                       xorl    %edx, %edx      # i
               .L2:
               # test.c:6:   for (i = 0; i < n; i++)
                       cmpl    %edi, %edx      # n, i
                       jge     .L5     #,
               # test.c:7:     total += i * i;
                       movl    %edx, %ecx      # i, tmp92
                       imull   %edx, %ecx      # i, tmp92
               # test.c:6:   for (i = 0; i < n; i++)
                       incl    %edx    # i
               # test.c:7:     total += i * i;
                       addl    %ecx, %eax      # tmp92, <retval>
                       jmp     .L2     #
               .L5:
               # test.c:10: }
                       ret
                       .cfi_endproc
               .LFE0:
                       .size   test, .-test
                       .ident  "GCC: (GNU) 7.0.0 20160809 (experimental)"
                       .section        .note.GNU-stack,"",@progbits

           The comments are intended for humans rather than machines and hence the precise format of the comments is subject to change. -frecord-gcc-switches This switch causes the command line used to invoke the compiler to be recorded into the object file that is being created. This switch is only implemented on some targets and the exact format of the recording is target and binary file format dependent, but it usually takes the form of a section containing ASCII text. This switch is related to the -fverbose-asm switch, but that switch only records information in the assembler output file as comments, so it never reaches the object file. See also -grecord-gcc-switches for another way of storing compiler options into the object file.
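           For example, on ELF targets the recorded switches can usually be read back with the binutils tools (the section name .GCC.command.line is what current GCC releases use on ELF; other object formats differ):

               gcc -O2 -frecord-gcc-switches -c foo.c
               readelf -p .GCC.command.line foo.o

           The second command prints the command-line text that was stored in the object file.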
-fpic Generate position-independent code (PIC) suitable for use in a shared library, if supported for the target machine. Such code accesses all constant addresses through a global offset table (GOT). The dynamic loader resolves the GOT entries when the program starts (the dynamic loader is not part of GCC; it is part of the operating system). If the GOT size for the linked executable exceeds a machine-specific maximum size, you get an error message from the linker indicating that -fpic does not work; in that case, recompile with -fPIC instead. (These maximums are 8k on the SPARC, 28k on AArch64 and 32k on the m68k and RS/6000. The x86 has no such limit.) Position-independent code requires special support, and therefore works only on certain machines. For the x86, GCC supports PIC for System V but not for the Sun 386i. Code generated for the IBM RS/6000 is always position-independent. When this flag is set, the macros "__pic__" and "__PIC__" are defined to 1. -fPIC If supported for the target machine, emit position- independent code, suitable for dynamic linking and avoiding any limit on the size of the global offset table. This option makes a difference on AArch64, m68k, PowerPC and SPARC. Position-independent code requires special support, and therefore works only on certain machines. When this flag is set, the macros "__pic__" and "__PIC__" are defined to 2. -fpie -fPIE These options are similar to -fpic and -fPIC, but the generated position-independent code can be only linked into executables. Usually these options are used to compile code that will be linked using the -pie GCC option. -fpie and -fPIE both define the macros "__pie__" and "__PIE__". The macros have the value 1 for -fpie and 2 for -fPIE. -fno-plt Do not use the PLT for external function calls in position- independent code. Instead, load the callee address at call sites from the GOT and branch to it. This leads to more efficient code by eliminating PLT stubs and exposing GOT loads to optimizations. On architectures such as 32-bit x86 where PLT stubs expect the GOT pointer in a specific register, this gives more register allocation freedom to the compiler. Lazy binding requires use of the PLT; with -fno-plt all external symbols are resolved at load time. Alternatively, the function attribute "noplt" can be used to avoid calls through the PLT for specific external functions. In position-dependent code, a few targets also convert calls to functions that are marked to not use the PLT to use the GOT instead. -fno-jump-tables Do not use jump tables for switch statements even where it would be more efficient than other code generation strategies. This option is of use in conjunction with -fpic or -fPIC for building code that forms part of a dynamic linker and cannot reference the address of a jump table. On some targets, jump tables do not require a GOT and this option is not needed. -ffixed-reg Treat the register named reg as a fixed register; generated code should never refer to it (except perhaps as a stack pointer, frame pointer or in some other fixed role). reg must be the name of a register. The register names accepted are machine-specific and are defined in the "REGISTER_NAMES" macro in the machine description macro file. This flag does not have a negative form, because it specifies a three-way choice. -fcall-used-reg Treat the register named reg as an allocable register that is clobbered by function calls. It may be allocated for temporaries or variables that do not live across a call. 
Functions compiled this way do not save and restore the register reg. It is an error to use this flag with the frame pointer or stack pointer. Use of this flag for other registers that have fixed pervasive roles in the machine's execution model produces disastrous results. This flag does not have a negative form, because it specifies a three-way choice. -fcall-saved-reg Treat the register named reg as an allocable register saved by functions. It may be allocated even for temporaries or variables that live across a call. Functions compiled this way save and restore the register reg if they use it. It is an error to use this flag with the frame pointer or stack pointer. Use of this flag for other registers that have fixed pervasive roles in the machine's execution model produces disastrous results. A different sort of disaster results from the use of this flag for a register in which function values may be returned. This flag does not have a negative form, because it specifies a three-way choice. -fpack-struct[=n] Without a value specified, pack all structure members together without holes. When a value is specified (which must be a small power of two), pack structure members according to this value, representing the maximum alignment (that is, objects with default alignment requirements larger than this are output potentially unaligned at the next fitting location. Warning: the -fpack-struct switch causes GCC to generate code that is not binary compatible with code generated without that switch. Additionally, it makes the code suboptimal. Use it to conform to a non-default application binary interface. -fleading-underscore This option and its counterpart, -fno-leading-underscore, forcibly change the way C symbols are represented in the object file. One use is to help link with legacy assembly code. Warning: the -fleading-underscore switch causes GCC to generate code that is not binary compatible with code generated without that switch. Use it to conform to a non- default application binary interface. Not all targets provide complete support for this switch. -ftls-model=model Alter the thread-local storage model to be used. The model argument should be one of global-dynamic, local-dynamic, initial-exec or local-exec. Note that the choice is subject to optimization: the compiler may use a more efficient model for symbols not visible outside of the translation unit, or if -fpic is not given on the command line. The default without -fpic is initial-exec; with -fpic the default is global-dynamic. -ftrampolines For targets that normally need trampolines for nested functions, always generate them instead of using descriptors. Otherwise, for targets that do not need them, like for example HP-PA or IA-64, do nothing. A trampoline is a small piece of code that is created at run time on the stack when the address of a nested function is taken, and is used to call the nested function indirectly. Therefore, it requires the stack to be made executable in order for the program to work properly. -fno-trampolines is enabled by default on a language by language basis to let the compiler avoid generating them, if it computes that this is safe, and replace them with descriptors. Descriptors are made up of data only, but the generated code must be prepared to deal with them. As of this writing, -fno-trampolines is enabled by default only for Ada. Moreover, code compiled with -ftrampolines and code compiled with -fno-trampolines are not binary compatible if nested functions are present. 
This option must therefore be used on a program-wide basis and be manipulated with extreme care. -fvisibility=[default|internal|hidden|protected] Set the default ELF image symbol visibility to the specified option---all symbols are marked with this unless overridden within the code. Using this feature can very substantially improve linking and load times of shared object libraries, produce more optimized code, provide near-perfect API export and prevent symbol clashes. It is strongly recommended that you use this in any shared objects you distribute. Despite the nomenclature, default always means public; i.e., available to be linked against from outside the shared object. protected and internal are pretty useless in real- world usage so the only other commonly used option is hidden. The default if -fvisibility isn't specified is default, i.e., make every symbol public. A good explanation of the benefits offered by ensuring ELF symbols have the correct visibility is given by "How To Write Shared Libraries" by Ulrich Drepper (which can be found at <https://www.akkadia.org/drepper/ >)---however a superior solution made possible by this option to marking things hidden when the default is public is to make the default hidden and mark things public. This is the norm with DLLs on Windows and with -fvisibility=hidden and "__attribute__ ((visibility("default")))" instead of "__declspec(dllexport)" you get almost identical semantics with identical syntax. This is a great boon to those working with cross-platform projects. For those adding visibility support to existing code, you may find "#pragma GCC visibility" of use. This works by you enclosing the declarations you wish to set visibility for with (for example) "#pragma GCC visibility push(hidden)" and "#pragma GCC visibility pop". Bear in mind that symbol visibility should be viewed as part of the API interface contract and thus all new code should always specify visibility when it is not the default; i.e., declarations only for use within the local DSO should always be marked explicitly as hidden as so to avoid PLT indirection overheads---making this abundantly clear also aids readability and self-documentation of the code. Note that due to ISO C++ specification requirements, "operator new" and "operator delete" must always be of default visibility. Be aware that headers from outside your project, in particular system headers and headers from any other library you use, may not be expecting to be compiled with visibility other than the default. You may need to explicitly say "#pragma GCC visibility push(default)" before including any such headers. "extern" declarations are not affected by -fvisibility, so a lot of code can be recompiled with -fvisibility=hidden with no modifications. However, this means that calls to "extern" functions with no explicit visibility use the PLT, so it is more effective to use "__attribute ((visibility))" and/or "#pragma GCC visibility" to tell the compiler which "extern" declarations should be treated as hidden. Note that -fvisibility does affect C++ vague linkage entities. This means that, for instance, an exception class that is be thrown between DSOs must be explicitly marked with default visibility so that the type_info nodes are unified between the DSOs. An overview of these techniques, their benefits and how to use them is at <http://gcc.gnu.org/wiki/Visibility >. 
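           A common way to use -fvisibility is to hide everything by default and export only the intended entry points, for example (a minimal sketch; all names are illustrative):

               /* mylib.c */
               #define MYLIB_API __attribute__ ((visibility ("default")))

               MYLIB_API int mylib_answer (void)   /* exported from the DSO */
               {
                 return 42;
               }

               int mylib_detail (void)             /* hidden by -fvisibility=hidden */
               {
                 return 1;
               }

           Building this with "gcc -shared -fPIC -fvisibility=hidden -o libmylib.so mylib.c" exports only mylib_answer as a dynamic symbol, which can be verified with "nm -D libmylib.so".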
-fstrict-volatile-bitfields This option should be used if accesses to volatile bit-fields (or other structure fields, although the compiler usually honors those types anyway) should use a single access of the width of the field's type, aligned to a natural alignment if possible. For example, targets with memory-mapped peripheral registers might require all such accesses to be 16 bits wide; with this flag you can declare all peripheral bit-fields as "unsigned short" (assuming short is 16 bits on these targets) to force GCC to use 16-bit accesses instead of, perhaps, a more efficient 32-bit access. If this option is disabled, the compiler uses the most efficient instruction. In the previous example, that might be a 32-bit load instruction, even though that accesses bytes that do not contain any portion of the bit-field, or memory- mapped registers unrelated to the one being updated. In some cases, such as when the "packed" attribute is applied to a structure field, it may not be possible to access the field with a single read or write that is correctly aligned for the target machine. In this case GCC falls back to generating multiple accesses rather than code that will fault or truncate the result at run time. Note: Due to restrictions of the C/C++11 memory model, write accesses are not allowed to touch non bit-field members. It is therefore recommended to define all bits of the field's type as bit-field members. The default value of this option is determined by the application binary interface for the target processor. -fsync-libcalls This option controls whether any out-of-line instance of the "__sync" family of functions may be used to implement the C++11 "__atomic" family of functions. The default value of this option is enabled, thus the only useful form of the option is -fno-sync-libcalls. This option is used in the implementation of the libatomic runtime library. GCC Developer Options This section describes command-line options that are primarily of interest to GCC developers, including options to support compiler testing and investigation of compiler bugs and compile-time performance problems. This includes options that produce debug dumps at various points in the compilation; that print statistics such as memory use and execution time; and that print information about GCC's configuration, such as where it searches for libraries. You should rarely need to use any of these options for ordinary compilation and linking tasks. Many developer options that cause GCC to dump output to a file take an optional =filename suffix. You can specify stdout or - to dump to standard output, and stderr for standard error. If =filename is omitted, a default dump file name is constructed by concatenating the base dump file name, a pass number, phase letter, and pass name. The base dump file name is the name of output file produced by the compiler if explicitly specified and not an executable; otherwise it is the source file name. The pass number is determined by the order passes are registered with the compiler's pass manager. This is generally the same as the order of execution, but passes registered by plugins, target- specific passes, or passes that are otherwise registered late are numbered higher than the pass named final, even if they are executed earlier. The phase letter is one of i (inter-procedural analysis), l (language-specific), r (RTL), or t (tree). The files are created in the directory of the output file. 
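           For example, a single RTL dump (the individual -fdump-rtl suboptions are listed below) can be requested with a command line such as the following; the pass number in the dump file name varies between GCC versions:

               gcc -O2 -c foo.c -fdump-rtl-combine

           This creates, next to foo.o, a file named something like foo.c.NNNr.combine, where NNN is the pass number and the letter r marks an RTL pass, as described above.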
-dletters -fdump-rtl-pass -fdump-rtl-pass=filename Says to make debugging dumps during compilation at times specified by letters. This is used for debugging the RTL- based passes of the compiler. Some -dletters switches have different meaning when -E is used for preprocessing. Debug dumps can be enabled with a -fdump-rtl switch or some -d option letters. Here are the possible letters for use in pass and letters, and their meanings: -fdump-rtl-alignments Dump after branch alignments have been computed. -fdump-rtl-asmcons Dump after fixing rtl statements that have unsatisfied in/out constraints. -fdump-rtl-auto_inc_dec Dump after auto-inc-dec discovery. This pass is only run on architectures that have auto inc or auto dec instructions. -fdump-rtl-barriers Dump after cleaning up the barrier instructions. -fdump-rtl-bbpart Dump after partitioning hot and cold basic blocks. -fdump-rtl-bbro Dump after block reordering. -fdump-rtl-btl1 -fdump-rtl-btl2 -fdump-rtl-btl1 and -fdump-rtl-btl2 enable dumping after the two branch target load optimization passes. -fdump-rtl-bypass Dump after jump bypassing and control flow optimizations. -fdump-rtl-combine Dump after the RTL instruction combination pass. -fdump-rtl-compgotos Dump after duplicating the computed gotos. -fdump-rtl-ce1 -fdump-rtl-ce2 -fdump-rtl-ce3 -fdump-rtl-ce1, -fdump-rtl-ce2, and -fdump-rtl-ce3 enable dumping after the three if conversion passes. -fdump-rtl-cprop_hardreg Dump after hard register copy propagation. -fdump-rtl-csa Dump after combining stack adjustments. -fdump-rtl-cse1 -fdump-rtl-cse2 -fdump-rtl-cse1 and -fdump-rtl-cse2 enable dumping after the two common subexpression elimination passes. -fdump-rtl-dce Dump after the standalone dead code elimination passes. -fdump-rtl-dbr Dump after delayed branch scheduling. -fdump-rtl-dce1 -fdump-rtl-dce2 -fdump-rtl-dce1 and -fdump-rtl-dce2 enable dumping after the two dead store elimination passes. -fdump-rtl-eh Dump after finalization of EH handling code. -fdump-rtl-eh_ranges Dump after conversion of EH handling range regions. -fdump-rtl-expand Dump after RTL generation. -fdump-rtl-fwprop1 -fdump-rtl-fwprop2 -fdump-rtl-fwprop1 and -fdump-rtl-fwprop2 enable dumping after the two forward propagation passes. -fdump-rtl-gcse1 -fdump-rtl-gcse2 -fdump-rtl-gcse1 and -fdump-rtl-gcse2 enable dumping after global common subexpression elimination. -fdump-rtl-init-regs Dump after the initialization of the registers. -fdump-rtl-initvals Dump after the computation of the initial value sets. -fdump-rtl-into_cfglayout Dump after converting to cfglayout mode. -fdump-rtl-ira Dump after iterated register allocation. -fdump-rtl-jump Dump after the second jump optimization. -fdump-rtl-loop2 -fdump-rtl-loop2 enables dumping after the rtl loop optimization passes. -fdump-rtl-mach Dump after performing the machine dependent reorganization pass, if that pass exists. -fdump-rtl-mode_sw Dump after removing redundant mode switches. -fdump-rtl-rnreg Dump after register renumbering. -fdump-rtl-outof_cfglayout Dump after converting from cfglayout mode. -fdump-rtl-peephole2 Dump after the peephole pass. -fdump-rtl-postreload Dump after post-reload optimizations. -fdump-rtl-pro_and_epilogue Dump after generating the function prologues and epilogues. -fdump-rtl-sched1 -fdump-rtl-sched2 -fdump-rtl-sched1 and -fdump-rtl-sched2 enable dumping after the basic block scheduling passes. -fdump-rtl-ree Dump after sign/zero extension elimination. -fdump-rtl-seqabstr Dump after common sequence discovery. 
-fdump-rtl-shorten Dump after shortening branches. -fdump-rtl-sibling Dump after sibling call optimizations. -fdump-rtl-split1 -fdump-rtl-split2 -fdump-rtl-split3 -fdump-rtl-split4 -fdump-rtl-split5 These options enable dumping after five rounds of instruction splitting. -fdump-rtl-sms Dump after modulo scheduling. This pass is only run on some architectures. -fdump-rtl-stack Dump after conversion from GCC's "flat register file" registers to the x87's stack-like registers. This pass is only run on x86 variants. -fdump-rtl-subreg1 -fdump-rtl-subreg2 -fdump-rtl-subreg1 and -fdump-rtl-subreg2 enable dumping after the two subreg expansion passes. -fdump-rtl-unshare Dump after all rtl has been unshared. -fdump-rtl-vartrack Dump after variable tracking. -fdump-rtl-vregs Dump after converting virtual registers to hard registers. -fdump-rtl-web Dump after live range splitting. -fdump-rtl-regclass -fdump-rtl-subregs_of_mode_init -fdump-rtl-subregs_of_mode_finish -fdump-rtl-dfinit -fdump-rtl-dfinish These dumps are defined but always produce empty files. -da -fdump-rtl-all Produce all the dumps listed above. -dA Annotate the assembler output with miscellaneous debugging information. -dD Dump all macro definitions, at the end of preprocessing, in addition to normal output. -dH Produce a core dump whenever an error occurs. -dp Annotate the assembler output with a comment indicating which pattern and alternative is used. The length and cost of each instruction are also printed. -dP Dump the RTL in the assembler output as a comment before each instruction. Also turns on -dp annotation. -dx Just generate RTL for a function instead of compiling it. Usually used with -fdump-rtl-expand. -fdump-debug Dump debugging information generated during the debug generation phase. -fdump-earlydebug Dump debugging information generated during the early debug generation phase. -fdump-noaddr When doing debugging dumps, suppress address output. This makes it more feasible to use diff on debugging dumps for compiler invocations with different compiler binaries and/or different text / bss / data / heap / stack / dso start locations. -freport-bug Collect and dump debug information into a temporary file if an internal compiler error (ICE) occurs. -fdump-unnumbered When doing debugging dumps, suppress instruction numbers and address output. This makes it more feasible to use diff on debugging dumps for compiler invocations with different options, in particular with and without -g. -fdump-unnumbered-links When doing debugging dumps (see -d option above), suppress instruction numbers for the links to the previous and next instructions in a sequence. -fdump-ipa-switch -fdump-ipa-switch-options Control the dumping at various stages of inter-procedural analysis language tree to a file. The file name is generated by appending a switch specific suffix to the source file name, and the file is created in the same directory as the output file. The following dumps are possible: all Enables all inter-procedural analysis dumps. cgraph Dumps information about call-graph optimization, unused function removal, and inlining decisions. inline Dump after function inlining. Additionally, the options -optimized, -missed, -note, and -all can be provided, with the same meaning as for -fopt-info, defaulting to -optimized. For example, -fdump-ipa-inline-optimized-missed will emit information on callsites that were inlined, along with callsites that were not inlined. 
By default, the dump will contain messages about successful optimizations (equivalent to -optimized) together with low- level details about the analysis. -fdump-lang-all -fdump-lang-switch -fdump-lang-switch-options -fdump-lang-switch-options=filename Control the dumping of language-specific information. The options and filename portions behave as described in the -fdump-tree option. The following switch values are accepted: all Enable all language-specific dumps. class Dump class hierarchy information. Virtual table information is emitted unless 'slim' is specified. This option is applicable to C++ only. raw Dump the raw internal tree data. This option is applicable to C++ only. -fdump-passes Print on stderr the list of optimization passes that are turned on and off by the current command-line options. -fdump-statistics-option Enable and control dumping of pass statistics in a separate file. The file name is generated by appending a suffix ending in .statistics to the source file name, and the file is created in the same directory as the output file. If the -option form is used, -stats causes counters to be summed over the whole compilation unit while -details dumps every event as the passes generate them. The default with no option is to sum counters for each function compiled. -fdump-tree-all -fdump-tree-switch -fdump-tree-switch-options -fdump-tree-switch-options=filename Control the dumping at various stages of processing the intermediate language tree to a file. If the -options form is used, options is a list of - separated options which control the details of the dump. Not all options are applicable to all dumps; those that are not meaningful are ignored. The following options are available address Print the address of each node. Usually this is not meaningful as it changes according to the environment and source file. Its primary use is for tying up a dump file with a debug environment. asmname If "DECL_ASSEMBLER_NAME" has been set for a given decl, use that in the dump instead of "DECL_NAME". Its primary use is ease of use working backward from mangled names in the assembly file. slim When dumping front-end intermediate representations, inhibit dumping of members of a scope or body of a function merely because that scope has been reached. Only dump such items when they are directly reachable by some other path. When dumping pretty-printed trees, this option inhibits dumping the bodies of control structures. When dumping RTL, print the RTL in slim (condensed) form instead of the default LISP-like representation. raw Print a raw representation of the tree. By default, trees are pretty-printed into a C-like representation. details Enable more detailed dumps (not honored by every dump option). Also include information from the optimization passes. stats Enable dumping various statistics about the pass (not honored by every dump option). blocks Enable showing basic block boundaries (disabled in raw dumps). graph For each of the other indicated dump files (-fdump-rtl-pass), dump a representation of the control flow graph suitable for viewing with GraphViz to file.passid.pass.dot. Each function in the file is pretty-printed as a subgraph, so that GraphViz can render them all in a single plot. This option currently only works for RTL dumps, and the RTL is always dumped in slim form. vops Enable showing virtual operands for every statement. lineno Enable showing line numbers for statements. uid Enable showing the unique ID ("DECL_UID") for each variable. 
verbose Enable showing the tree dump for each statement. eh Enable showing the EH region number holding each statement. scev Enable showing scalar evolution analysis details. optimized Enable showing optimization information (only available in certain passes). missed Enable showing missed optimization information (only available in certain passes). note Enable other detailed optimization information (only available in certain passes). all Turn on all options, except raw, slim, verbose and lineno. optall Turn on all optimization options, i.e., optimized, missed, and note. To determine what tree dumps are available or find the dump for a pass of interest follow the steps below. 1. Invoke GCC with -fdump-passes and in the stderr output look for a code that corresponds to the pass you are interested in. For example, the codes "tree-evrp", "tree-vrp1", and "tree-vrp2" correspond to the three Value Range Propagation passes. The number at the end distinguishes distinct invocations of the same pass. 2. To enable the creation of the dump file, append the pass code to the -fdump- option prefix and invoke GCC with it. For example, to enable the dump from the Early Value Range Propagation pass, invoke GCC with the -fdump-tree-evrp option. Optionally, you may specify the name of the dump file. If you don't specify one, GCC creates as described below. 3. Find the pass dump in a file whose name is composed of three components separated by a period: the name of the source file GCC was invoked to compile, a numeric suffix indicating the pass number followed by the letter t for tree passes (and the letter r for RTL passes), and finally the pass code. For example, the Early VRP pass dump might be in a file named myfile.c.038t.evrp in the current working directory. Note that the numeric codes are not stable and may change from one version of GCC to another. -fopt-info -fopt-info-options -fopt-info-options=filename Controls optimization dumps from various optimization passes. If the -options form is used, options is a list of - separated option keywords to select the dump details and optimizations. The options can be divided into three groups: 1. options describing what kinds of messages should be emitted, 2. options describing the verbosity of the dump, and 3. options describing which optimizations should be included. The options from each group can be freely mixed as they are non-overlapping. However, in case of any conflicts, the later options override the earlier options on the command line. The following options control which kinds of messages should be emitted: optimized Print information when an optimization is successfully applied. It is up to a pass to decide which information is relevant. For example, the vectorizer passes print the source location of loops which are successfully vectorized. missed Print information about missed optimizations. Individual passes control which information to include in the output. note Print verbose information about optimizations, such as certain transformations, more detailed messages about decisions etc. all Print detailed optimization information. This includes optimized, missed, and note. The following option controls the dump verbosity: internals By default, only "high-level" messages are emitted. This option enables additional, more detailed, messages, which are likely to only be of interest to GCC developers. One or more of the following option keywords can be used to describe a group of optimizations: ipa Enable dumps from all interprocedural optimizations. 
loop Enable dumps from all loop optimizations. inline Enable dumps from all inlining optimizations. omp Enable dumps from all OMP (Offloading and Multi Processing) optimizations. vec Enable dumps from all vectorization optimizations. optall Enable dumps from all optimizations. This is a superset of the optimization groups listed above. If options is omitted, it defaults to optimized-optall, which means to dump messages about successful optimizations from all the passes, omitting messages that are treated as "internals". If the filename is provided, then the dumps from all the applicable optimizations are concatenated into the filename. Otherwise the dump is output onto stderr. Though multiple -fopt-info options are accepted, only one of them can include a filename. If other filenames are provided then all but the first such option are ignored. Note that the output filename is overwritten in case of multiple translation units. If a combined output from multiple translation units is desired, stderr should be used instead. In the following example, the optimization info is output to stderr: gcc -O3 -fopt-info This example: gcc -O3 -fopt-info-missed=missed.all outputs missed optimization report from all the passes into missed.all, and this one: gcc -O2 -ftree-vectorize -fopt-info-vec-missed prints information about missed optimization opportunities from vectorization passes on stderr. Note that -fopt-info-vec-missed is equivalent to -fopt-info-missed-vec. The order of the optimization group names and message types listed after -fopt-info does not matter. As another example, gcc -O3 -fopt-info-inline-optimized-missed=inline.txt outputs information about missed optimizations as well as optimized locations from all the inlining passes into inline.txt. Finally, consider: gcc -fopt-info-vec-missed=vec.miss -fopt-info-loop-optimized=loop.opt Here the two output filenames vec.miss and loop.opt are in conflict since only one output file is allowed. In this case, only the first option takes effect and the subsequent options are ignored. Thus only vec.miss is produced which contains dumps from the vectorizer about missed opportunities. -fsave-optimization-record Write a SRCFILE.opt-record.json.gz file detailing what optimizations were performed, for those optimizations that support -fopt-info. This option is experimental and the format of the data within the compressed JSON file is subject to change. It is roughly equivalent to a machine-readable version of -fopt-info-all, as a collection of messages with source file, line number and column number, with the following additional data for each message: * the execution count of the code being optimized, along with metadata about whether this was from actual profile data, or just an estimate, allowing consumers to prioritize messages by code hotness, * the function name of the code being optimized, where applicable, * the "inlining chain" for the code being optimized, so that when a function is inlined into several different places (which might themselves be inlined), the reader can distinguish between the copies, * objects identifying those parts of the message that refer to expressions, statements or symbol-table nodes, which of these categories they are, and, when available, their source code location, * the GCC pass that emitted the message, and * the location in GCC's own code from which the message was emitted Additionally, some messages are logically nested within other messages, reflecting implementation details of the optimization passes. 
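As a minimal sketch of how these reports are typically requested (the file name vec-demo.c and the function name are hypothetical), a simple vectorizable loop is enough to produce messages from the vectorizer:
    /* vec-demo.c - hypothetical example.
       gcc -O3 -c vec-demo.c -fopt-info-vec-optimized
           prints the vectorizer's successful-optimization messages on stderr;
       gcc -O3 -c vec-demo.c -fsave-optimization-record
           instead writes the compressed JSON record described above. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n)
    {
        for (int i = 0; i < n; i++)   /* a loop the vectorizer can usually handle */
            y[i] = a * x[i] + y[i];
    }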
-fsched-verbose=n On targets that use instruction scheduling, this option controls the amount of debugging output the scheduler prints to the dump files. For n greater than zero, -fsched-verbose outputs the same information as -fdump-rtl-sched1 and -fdump-rtl-sched2. For n greater than one, it also outputs basic block probabilities, detailed ready list information and unit/insn info. For n greater than two, it includes RTL at abort point, control-flow and regions info. For n greater than four, -fsched-verbose also includes dependence info. -fenable-kind-pass -fdisable-kind-pass=range-list This is a set of options that are used to explicitly disable/enable optimization passes. These options are intended for use in debugging GCC. Compiler users should use regular options for enabling/disabling passes instead. -fdisable-ipa-pass Disable IPA pass pass. pass is the pass name. If the same pass is statically invoked in the compiler multiple times, the pass name should be appended with a sequential number starting from 1. -fdisable-rtl-pass -fdisable-rtl-pass=range-list Disable RTL pass pass. pass is the pass name. If the same pass is statically invoked in the compiler multiple times, the pass name should be appended with a sequential number starting from 1. range-list is a comma-separated list of function ranges or assembler names. Each range is a number pair separated by a colon. The range is inclusive at both ends. If the range is trivial, the number pair can be simplified as a single number. If the function's call graph node's uid falls within one of the specified ranges, the pass is disabled for that function. The uid is shown in the function header of a dump file, and the pass names can be dumped by using option -fdump-passes. -fdisable-tree-pass -fdisable-tree-pass=range-list Disable tree pass pass. See -fdisable-rtl for the description of option arguments. -fenable-ipa-pass Enable IPA pass pass. pass is the pass name. If the same pass is statically invoked in the compiler multiple times, the pass name should be appended with a sequential number starting from 1. -fenable-rtl-pass -fenable-rtl-pass=range-list Enable RTL pass pass. See -fdisable-rtl for option argument description and examples. -fenable-tree-pass -fenable-tree-pass=range-list Enable tree pass pass. See -fdisable-rtl for the description of option arguments. Here are some examples showing uses of these options.
    # disable ccp1 for all functions
    -fdisable-tree-ccp1
    # enable complete unrolling for the function whose cgraph node uid is 1
    -fenable-tree-cunroll=1
    # disable gcse2 for functions at the following ranges [1,1],
    # [300,400], and [400,1000]
    -fdisable-rtl-gcse2=1,300:400,400:1000
    # disable gcse2 for functions foo and foo2
    -fdisable-rtl-gcse2=foo,foo2
    # disable early inlining
    -fdisable-tree-einline
    # disable ipa inlining
    -fdisable-ipa-inline
    # enable tree full unroll
    -fenable-tree-unroll
-fchecking -fchecking=n Enable internal consistency checking. The default depends on the compiler configuration. -fchecking=2 enables further internal consistency checking that might affect code generation. -frandom-seed=string This option provides a seed that GCC uses in place of random numbers in generating certain symbol names that have to be different in every compiled file. It is also used to place unique stamps in coverage data files and the object files that produce them. You can use the -frandom-seed option to produce reproducibly identical object files.
The string can either be a number (decimal, octal or hex) or an arbitrary string (in which case it's converted to a number by computing CRC32). The string should be different for every file you compile. -save-temps -save-temps=cwd Store the usual "temporary" intermediate files permanently; place them in the current directory and name them based on the source file. Thus, compiling foo.c with -c -save-temps produces files foo.i and foo.s, as well as foo.o. This creates a preprocessed foo.i output file even though the compiler now normally uses an integrated preprocessor. When used in combination with the -x command-line option, -save-temps is sensible enough to avoid over writing an input source file with the same extension as an intermediate file. The corresponding intermediate file may be obtained by renaming the source file before using -save-temps. If you invoke GCC in parallel, compiling several different source files that share a common base name in different subdirectories or the same source file compiled for multiple output destinations, it is likely that the different parallel compilers will interfere with each other, and overwrite the temporary files. For instance: gcc -save-temps -o outdir1/foo.o indir1/foo.c& gcc -save-temps -o outdir2/foo.o indir2/foo.c& may result in foo.i and foo.o being written to simultaneously by both compilers. -save-temps=obj Store the usual "temporary" intermediate files permanently. If the -o option is used, the temporary files are based on the object file. If the -o option is not used, the -save-temps=obj switch behaves like -save-temps. For example: gcc -save-temps=obj -c foo.c gcc -save-temps=obj -c bar.c -o dir/xbar.o gcc -save-temps=obj foobar.c -o dir2/yfoobar creates foo.i, foo.s, dir/xbar.i, dir/xbar.s, dir2/yfoobar.i, dir2/yfoobar.s, and dir2/yfoobar.o. -time[=file] Report the CPU time taken by each subprocess in the compilation sequence. For C source files, this is the compiler proper and assembler (plus the linker if linking is done). Without the specification of an output file, the output looks like this: # cc1 0.12 0.01 # as 0.00 0.01 The first number on each line is the "user time", that is time spent executing the program itself. The second number is "system time", time spent executing operating system routines on behalf of the program. Both numbers are in seconds. With the specification of an output file, the output is appended to the named file, and it looks like this: 0.12 0.01 cc1 <options> 0.00 0.01 as <options> The "user time" and the "system time" are moved before the program name, and the options passed to the program are displayed, so that one can later tell what file was being compiled, and with which options. -fdump-final-insns[=file] Dump the final internal representation (RTL) to file. If the optional argument is omitted (or if file is "."), the name of the dump file is determined by appending ".gkd" to the compilation output file name. -fcompare-debug[=opts] If no error occurs during compilation, run the compiler a second time, adding opts and -fcompare-debug-second to the arguments passed to the second compilation. Dump the final internal representation in both compilations, and print an error if they differ. If the equal sign is omitted, the default -gtoggle is used. The environment variable GCC_COMPARE_DEBUG, if defined, non- empty and nonzero, implicitly enables -fcompare-debug. If GCC_COMPARE_DEBUG is defined to a string starting with a dash, then it is used for opts, otherwise the default -gtoggle is used. 
-fcompare-debug=, with the equal sign but without opts, is equivalent to -fno-compare-debug, which disables the dumping of the final representation and the second compilation, preventing even GCC_COMPARE_DEBUG from taking effect. To verify full coverage during -fcompare-debug testing, set GCC_COMPARE_DEBUG to say -fcompare-debug-not-overridden, which GCC rejects as an invalid option in any actual compilation (rather than preprocessing, assembly or linking). To get just a warning, setting GCC_COMPARE_DEBUG to -w%n-fcompare-debug not overridden will do. -fcompare-debug-second This option is implicitly passed to the compiler for the second compilation requested by -fcompare-debug, along with options to silence warnings, and omitting other options that would cause the compiler to produce output to files or to standard output as a side effect. Dump files and preserved temporary files are renamed so as to contain the ".gk" additional extension during the second compilation, to avoid overwriting those generated by the first. When this option is passed to the compiler driver, it causes the first compilation to be skipped, which makes it useful for little other than debugging the compiler proper. -gtoggle Turn off generation of debug info, if leaving out this option generates it, or turn it on at level 2 otherwise. The position of this argument in the command line does not matter; it takes effect after all other options are processed, and it does so only once, no matter how many times it is given. This is mainly intended to be used with -fcompare-debug. -fvar-tracking-assignments-toggle Toggle -fvar-tracking-assignments, in the same way that -gtoggle toggles -g. -Q Makes the compiler print out each function name as it is compiled, and print some statistics about each pass when it finishes. -ftime-report Makes the compiler print some statistics about the time consumed by each pass when it finishes. -ftime-report-details Record the time consumed by infrastructure parts separately for each pass. -fira-verbose=n Control the verbosity of the dump file for the integrated register allocator. The default value is 5. If the value n is greater or equal to 10, the dump output is sent to stderr using the same format as n minus 10. -flto-report Prints a report with internal details on the workings of the link-time optimizer. The contents of this report vary from version to version. It is meant to be useful to GCC developers when processing object files in LTO mode (via -flto). Disabled by default. -flto-report-wpa Like -flto-report, but only print for the WPA phase of Link Time Optimization. -fmem-report Makes the compiler print some statistics about permanent memory allocation when it finishes. -fmem-report-wpa Makes the compiler print some statistics about permanent memory allocation for the WPA phase only. -fpre-ipa-mem-report -fpost-ipa-mem-report Makes the compiler print some statistics about permanent memory allocation before or after interprocedural optimization. -fprofile-report Makes the compiler print some statistics about consistency of the (estimated) profile and effect of individual passes. -fstack-usage Makes the compiler output stack usage information for the program, on a per-function basis. The filename for the dump is made by appending .su to the auxname. auxname is generated from the name of the output file, if explicitly specified and it is not an executable, otherwise it is the basename of the source file. An entry is made up of three fields: * The name of the function. * A number of bytes. 
* One or more qualifiers: "static", "dynamic", "bounded". The qualifier "static" means that the function manipulates the stack statically: a fixed number of bytes are allocated for the frame on function entry and released on function exit; no stack adjustments are otherwise made in the function. The second field is this fixed number of bytes. The qualifier "dynamic" means that the function manipulates the stack dynamically: in addition to the static allocation described above, stack adjustments are made in the body of the function, for example to push/pop arguments around function calls. If the qualifier "bounded" is also present, the amount of these adjustments is bounded at compile time and the second field is an upper bound of the total amount of stack used by the function. If it is not present, the amount of these adjustments is not bounded at compile time and the second field only represents the bounded part. -fstats Emit statistics about front-end processing at the end of the compilation. This option is supported only by the C++ front end, and the information is generally only useful to the G++ development team. -fdbg-cnt-list Print the name and the counter upper bound for all debug counters. -fdbg-cnt=counter-value-list Set the internal debug counter lower and upper bound. counter-value-list is a comma-separated list of name:lower_bound:upper_bound tuples which sets the lower and the upper bound of each debug counter name. The lower_bound is optional and is zero initialized if not set. All debug counters have the initial upper bound of "UINT_MAX"; thus "dbg_cnt" returns true always unless the upper bound is set by this option. For example, with -fdbg-cnt=dce:2:4,tail_call:10, "dbg_cnt(dce)" returns true only for third and fourth invocation. For "dbg_cnt(tail_call)" true is returned for first 10 invocations. -print-file-name=library Print the full absolute name of the library file library that would be used when linking---and don't do anything else. With this option, GCC does not compile or link anything; it just prints the file name. -print-multi-directory Print the directory name corresponding to the multilib selected by any other switches present in the command line. This directory is supposed to exist in GCC_EXEC_PREFIX. -print-multi-lib Print the mapping from multilib directory names to compiler switches that enable them. The directory name is separated from the switches by ;, and each switch starts with an @ instead of the -, without spaces between multiple switches. This is supposed to ease shell processing. -print-multi-os-directory Print the path to OS libraries for the selected multilib, relative to some lib subdirectory. If OS libraries are present in the lib subdirectory and no multilibs are used, this is usually just ., if OS libraries are present in libsuffix sibling directories this prints e.g. ../lib64, ../lib or ../lib32, or if OS libraries are present in lib/subdir subdirectories it prints e.g. amd64, sparcv9 or ev6. -print-multiarch Print the path to OS libraries for the selected multiarch, relative to some lib subdirectory. -print-prog-name=program Like -print-file-name, but searches for a program such as cpp. -print-libgcc-file-name Same as -print-file-name=libgcc.a. This is useful when you use -nostdlib or -nodefaultlibs but you do want to link with libgcc.a. You can do: gcc -nostdlib <files>... 
`gcc -print-libgcc-file-name` -print-search-dirs Print the name of the configured installation directory and a list of program and library directories gcc searches---and don't do anything else. This is useful when gcc prints the error message installation problem, cannot exec cpp0: No such file or directory. To resolve this you either need to put cpp0 and the other compiler components where gcc expects to find them, or you can set the environment variable GCC_EXEC_PREFIX to the directory where you installed them. Don't forget the trailing /. -print-sysroot Print the target sysroot directory that is used during compilation. This is the target sysroot specified either at configure time or using the --sysroot option, possibly with an extra suffix that depends on compilation options. If no target sysroot is specified, the option prints nothing. -print-sysroot-headers-suffix Print the suffix added to the target sysroot when searching for headers, or give an error if the compiler is not configured with such a suffix---and don't do anything else. -dumpmachine Print the compiler's target machine (for example, i686-pc-linux-gnu)---and don't do anything else. -dumpversion Print the compiler version (for example, 3.0, 6.3.0 or 7)---and don't do anything else. This is the compiler version used in filesystem paths and specs. Depending on how the compiler has been configured it can be just a single number (major version), two numbers separated by a dot (major and minor version) or three numbers separated by dots (major, minor and patchlevel version). -dumpfullversion Print the full compiler version---and don't do anything else. The output is always three numbers separated by dots, major, minor and patchlevel version. -dumpspecs Print the compiler's built-in specs---and don't do anything else. (This is used when GCC itself is being built.) Machine-Dependent Options Each target machine supported by GCC can have its own options---for example, to allow you to compile for a particular processor variant or ABI, or to control optimizations specific to that machine. By convention, the names of machine-specific options start with -m. Some configurations of the compiler also support additional target-specific options, usually for compatibility with other compilers on the same platform. AArch64 Options These options are defined for AArch64 implementations: -mabi=name Generate code for the specified data model. Permissible values are ilp32 for SysV-like data model where int, long int and pointers are 32 bits, and lp64 for SysV-like data model where int is 32 bits, but long int and pointers are 64 bits. The default depends on the specific target configuration. Note that the LP64 and ILP32 ABIs are not link-compatible; you must compile your entire program with the same ABI, and link with a compatible set of libraries. -mbig-endian Generate big-endian code. This is the default when GCC is configured for an aarch64_be-*-* target. -mgeneral-regs-only Generate code which uses only the general-purpose registers. This will prevent the compiler from using floating-point and Advanced SIMD registers but will not impose any restrictions on the assembler. -mlittle-endian Generate little-endian code. This is the default when GCC is configured for an aarch64-*-* but not an aarch64_be-*-* target. -mcmodel=tiny Generate code for the tiny code model. The program and its statically defined symbols must be within 1MB of each other. Programs can be statically or dynamically linked. -mcmodel=small Generate code for the small code model. 
The program and its statically defined symbols must be within 4GB of each other. Programs can be statically or dynamically linked. This is the default code model. -mcmodel=large Generate code for the large code model. This makes no assumptions about addresses and sizes of sections. Programs can be statically linked only. -mstrict-align -mno-strict-align Avoid or allow generating memory accesses that may not be aligned on a natural object boundary as described in the architecture specification. -momit-leaf-frame-pointer -mno-omit-leaf-frame-pointer Omit or keep the frame pointer in leaf functions. The former behavior is the default. -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset Generate stack protection code using canary at guard. Supported locations are global for a global canary or sysreg for a canary in an appropriate system register. With the latter choice the options -mstack-protector-guard-reg=reg and -mstack-protector-guard-offset=offset furthermore specify which system register to use as base register for reading the canary, and from what offset from that base register. There is no default register or offset as this is entirely for use within the Linux kernel. -mtls-dialect=desc Use TLS descriptors as the thread-local storage mechanism for dynamic accesses of TLS variables. This is the default. -mtls-dialect=traditional Use traditional TLS as the thread-local storage mechanism for dynamic accesses of TLS variables. -mtls-size=size Specify bit size of immediate TLS offsets. Valid values are 12, 24, 32, 48. This option requires binutils 2.26 or newer. -mfix-cortex-a53-835769 -mno-fix-cortex-a53-835769 Enable or disable the workaround for the ARM Cortex-A53 erratum number 835769. This involves inserting a NOP instruction between memory instructions and 64-bit integer multiply-accumulate instructions. -mfix-cortex-a53-843419 -mno-fix-cortex-a53-843419 Enable or disable the workaround for the ARM Cortex-A53 erratum number 843419. This erratum workaround is made at link time and this will only pass the corresponding flag to the linker. -mlow-precision-recip-sqrt -mno-low-precision-recip-sqrt Enable or disable the reciprocal square root approximation. This option only has an effect if -ffast-math or -funsafe-math-optimizations is used as well. Enabling this reduces precision of reciprocal square root results to about 16 bits for single precision and to 32 bits for double precision. -mlow-precision-sqrt -mno-low-precision-sqrt Enable or disable the square root approximation. This option only has an effect if -ffast-math or -funsafe-math-optimizations is used as well. Enabling this reduces precision of square root results to about 16 bits for single precision and to 32 bits for double precision. If enabled, it implies -mlow-precision-recip-sqrt. -mlow-precision-div -mno-low-precision-div Enable or disable the division approximation.
This option only has an effect if -ffast-math or -funsafe-math-optimizations is used as well. Enabling this reduces precision of division results to about 16 bits for single precision and to 32 bits for double precision. -mtrack-speculation -mno-track-speculation Enable or disable generation of additional code to track speculative execution through conditional branches. The tracking state can then be used by the compiler when expanding calls to "__builtin_speculation_safe_copy" to permit a more efficient code sequence to be generated. -moutline-atomics -mno-outline-atomics Enable or disable calls to out-of-line helpers to implement atomic operations. These helpers will, at runtime, determine if the LSE instructions from ARMv8.1-A can be used; if not, they will use the load/store-exclusive instructions that are present in the base ARMv8.0 ISA. This option is only applicable when compiling for the base ARMv8.0 instruction set. If using a later revision, e.g. -march=armv8.1-a or -march=armv8-a+lse, the ARMv8.1-Atomics instructions will be used directly. The same applies when using -mcpu= when the selected cpu supports the lse feature. -march=name Specify the name of the target architecture and, optionally, one or more feature modifiers. This option has the form -march=arch{+[no]feature}*. The permissible values for arch are armv8-a, armv8.1-a, armv8.2-a, armv8.3-a, armv8.4-a, armv8.5-a or native. The value armv8.5-a implies armv8.4-a and enables compiler support for the ARMv8.5-A architecture extensions. The value armv8.4-a implies armv8.3-a and enables compiler support for the ARMv8.4-A architecture extensions. The value armv8.3-a implies armv8.2-a and enables compiler support for the ARMv8.3-A architecture extensions. The value armv8.2-a implies armv8.1-a and enables compiler support for the ARMv8.2-A architecture extensions. The value armv8.1-a implies armv8-a and enables compiler support for the ARMv8.1-A architecture extension. In particular, it enables the +crc, +lse, and +rdma features. The value native is available on native AArch64 GNU/Linux and causes the compiler to pick the architecture of the host system. This option has no effect if the compiler is unable to recognize the architecture of the host system, The permissible values for feature are listed in the sub- section on aarch64-feature-modifiers,,-march and -mcpu Feature Modifiers. Where conflicting feature modifiers are specified, the right-most feature is used. GCC uses name to determine what kind of instructions it can emit when generating assembly code. If -march is specified without either of -mtune or -mcpu also being specified, the code is tuned to perform well across a range of target processors implementing the target architecture. -mtune=name Specify the name of the target processor for which GCC should tune the performance of the code. Permissible values for this option are: generic, cortex-a35, cortex-a53, cortex-a55, cortex-a57, cortex-a72, cortex-a73, cortex-a75, cortex-a76, ares, exynos-m1, emag, falkor, neoverse-e1, neoverse-n1, neoverse-n2, neoverse-v1, neoverse-512tvb, qdf24xx, saphira, phecda, xgene1, vulcan, octeontx, octeontx81, octeontx83, a64fx, thunderx, thunderxt88, thunderxt88p1, thunderxt81, tsv110, thunderxt83, thunderx2t99, zeus, cortex-a57.cortex-a53, cortex-a72.cortex-a53, cortex-a73.cortex-a35, cortex-a73.cortex-a53, cortex-a75.cortex-a55, cortex-a76.cortex-a55 native. 
The values cortex-a57.cortex-a53, cortex-a72.cortex-a53, cortex-a73.cortex-a35, cortex-a73.cortex-a53, cortex-a75.cortex-a55, cortex-a76.cortex-a55 specify that GCC should tune for a big.LITTLE system. The value neoverse-512tvb specifies that GCC should tune for Neoverse cores that (a) implement SVE and (b) have a total vector bandwidth of 512 bits per cycle. In other words, the option tells GCC to tune for Neoverse cores that can execute 4 128-bit Advanced SIMD arithmetic instructions a cycle and that can execute an equivalent number of SVE arithmetic instructions per cycle (2 for 256-bit SVE, 4 for 128-bit SVE). This is more general than tuning for a specific core like Neoverse V1 but is more specific than the default tuning described below. Additionally on native AArch64 GNU/Linux systems the value native tunes performance to the host system. This option has no effect if the compiler is unable to recognize the processor of the host system. Where none of -mtune=, -mcpu= or -march= are specified, the code is tuned to perform well across a range of target processors. This option cannot be suffixed by feature modifiers. -mcpu=name Specify the name of the target processor, optionally suffixed by one or more feature modifiers. This option has the form -mcpu=cpu{+[no]feature}*, where the permissible values for cpu are the same as those available for -mtune. The permissible values for feature are documented in the sub- section on aarch64-feature-modifiers,,-march and -mcpu Feature Modifiers. Where conflicting feature modifiers are specified, the right-most feature is used. GCC uses name to determine what kind of instructions it can emit when generating assembly code (as if by -march) and to determine the target processor for which to tune for performance (as if by -mtune). Where this option is used in conjunction with -march or -mtune, those options take precedence over the appropriate part of this option. -mcpu=neoverse-512tvb is special in that it does not refer to a specific core, but instead refers to all Neoverse cores that (a) implement SVE and (b) have a total vector bandwidth of 512 bits a cycle. Unless overridden by -march, -mcpu=neoverse-512tvb generates code that can run on a Neoverse V1 core, since Neoverse V1 is the first Neoverse core with these properties. Unless overridden by -mtune, -mcpu=neoverse-512tvb tunes code in the same way as for -mtune=neoverse-512tvb. -moverride=string Override tuning decisions made by the back-end in response to a -mtune= switch. The syntax, semantics, and accepted values for string in this option are not guaranteed to be consistent across releases. This option is only intended to be useful when developing GCC. -mverbose-cost-dump Enable verbose cost model dumping in the debug dump files. This option is provided for use in debugging the compiler. -mpc-relative-literal-loads -mno-pc-relative-literal-loads Enable or disable PC-relative literal loads. With this option literal pools are accessed using a single instruction and emitted after each function. This limits the maximum size of functions to 1MB. This is enabled by default for -mcmodel=tiny. -msign-return-address=scope Select the function scope on which return address signing will be applied. Permissible values are none, which disables return address signing, non-leaf, which enables pointer signing for functions which are not leaf functions, and all, which enables pointer signing for all functions. The default value is none. This option has been deprecated by -mbranch-protection. 
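As an illustrative sketch of how the architecture, CPU and tuning options above are usually combined (the file name aarch64-demo.c is hypothetical, and a native or cross AArch64 compiler is assumed), each invocation below uses only flag spellings documented in this section:
    /* aarch64-demo.c - hypothetical example. Possible invocations:
         gcc -march=armv8.2-a+fp16 -c aarch64-demo.c   - architecture plus a feature modifier
         gcc -mcpu=cortex-a72 -c aarch64-demo.c        - selects instructions and tuning together
         gcc -march=armv8-a -mtune=cortex-a53 -c aarch64-demo.c
       These options change only code generation and tuning; no source changes are needed. */
    double halve(double x)
    {
        return x * 0.5;
    }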
-mbranch-protection=none|standard|pac-ret[+leaf]|bti Select the branch protection features to use. none is the default and turns off all types of branch protection. standard turns on all types of branch protection features. If a feature has additional tuning options, then standard sets it to its standard level. pac-ret[+leaf] turns on return address signing to its standard level: signing functions that save the return address to memory (non-leaf functions will practically always do this) using the a-key. The optional argument leaf can be used to extend the signing to include leaf functions. bti turns on branch target identification mechanism. -mharden-sls=opts Enable compiler hardening against straight line speculation (SLS). opts is a comma-separated list of the following options: retbr blr In addition, -mharden-sls=all enables all SLS hardening while -mharden-sls=none disables all SLS hardening. -msve-vector-bits=bits Specify the number of bits in an SVE vector register. This option only has an effect when SVE is enabled. GCC supports two forms of SVE code generation: "vector-length agnostic" output that works with any size of vector register and "vector-length specific" output that allows GCC to make assumptions about the vector length when it is useful for optimization reasons. The possible values of bits are: scalable, 128, 256, 512, 1024 and 2048. Specifying scalable selects vector-length agnostic output. At present -msve-vector-bits=128 also generates vector-length agnostic output. All other values generate vector-length specific code. The behavior of these values may change in future releases and no value except scalable should be relied on for producing code that is portable across different hardware SVE vector lengths. The default is -msve-vector-bits=scalable, which produces vector-length agnostic code. -march and -mcpu Feature Modifiers Feature modifiers used with -march and -mcpu can be any of the following and their inverses nofeature: crc Enable CRC extension. This is on by default for -march=armv8.1-a. crypto Enable Crypto extension. This also enables Advanced SIMD and floating-point instructions. fp Enable floating-point instructions. This is on by default for all possible values for options -march and -mcpu. simd Enable Advanced SIMD instructions. This also enables floating-point instructions. This is on by default for all possible values for options -march and -mcpu. sve Enable Scalable Vector Extension instructions. This also enables Advanced SIMD and floating-point instructions. lse Enable Large System Extension instructions. This is on by default for -march=armv8.1-a. rdma Enable Round Double Multiply Accumulate instructions. This is on by default for -march=armv8.1-a. fp16 Enable FP16 extension. This also enables floating-point instructions. fp16fml Enable FP16 fmla extension. This also enables FP16 extensions and floating-point instructions. This option is enabled by default for -march=armv8.4-a. Use of this option with architectures prior to Armv8.2-A is not supported. rcpc Enable the RcPc extension. This does not change code generation from GCC, but is passed on to the assembler, enabling inline asm statements to use instructions from the RcPc extension. dotprod Enable the Dot Product extension. This also enables Advanced SIMD instructions. aes Enable the Armv8-a aes and pmull crypto extension. This also enables Advanced SIMD instructions. sha2 Enable the Armv8-a sha2 crypto extension. This also enables Advanced SIMD instructions. 
sha3 Enable the sha512 and sha3 crypto extension. This also enables Advanced SIMD instructions. Use of this option with architectures prior to Armv8.2-A is not supported. sm4 Enable the sm3 and sm4 crypto extension. This also enables Advanced SIMD instructions. Use of this option with architectures prior to Armv8.2-A is not supported. profile Enable the Statistical Profiling extension. This option is only to enable the extension at the assembler level and does not affect code generation. rng Enable the Armv8.5-a Random Number instructions. This option is only to enable the extension at the assembler level and does not affect code generation. memtag Enable the Armv8.5-a Memory Tagging Extensions. This option is only to enable the extension at the assembler level and does not affect code generation. sb Enable the Armv8-a Speculation Barrier instruction. This option is only to enable the extension at the assembler level and does not affect code generation. This option is enabled by default for -march=armv8.5-a. ssbs Enable the Armv8-a Speculative Store Bypass Safe instruction. This option is only to enable the extension at the assembler level and does not affect code generation. This option is enabled by default for -march=armv8.5-a. predres Enable the Armv8-a Execution and Data Prediction Restriction instructions. This option is only to enable the extension at the assembler level and does not affect code generation. This option is enabled by default for -march=armv8.5-a. Feature crypto implies aes, sha2, and simd, which implies fp. Conversely, nofp implies nosimd, which implies nocrypto, noaes and nosha2. Adapteva Epiphany Options These -m options are defined for Adapteva Epiphany: -mhalf-reg-file Don't allocate any register in the range "r32"..."r63". That allows code to run on hardware variants that lack these registers. -mprefer-short-insn-regs Preferentially allocate registers that allow short instruction generation. This can result in increased instruction count, so this may either reduce or increase overall code size. -mbranch-cost=num Set the cost of branches to roughly num "simple" instructions. This cost is only a heuristic and is not guaranteed to produce consistent results across releases. -mcmove Enable the generation of conditional moves. -mnops=num Emit num NOPs before every other generated instruction. -mno-soft-cmpsf For single-precision floating-point comparisons, emit an "fsub" instruction and test the flags. This is faster than a software comparison, but can get incorrect results in the presence of NaNs, or when two different small numbers are compared such that their difference is calculated as zero. The default is -msoft-cmpsf, which uses slower, but IEEE- compliant, software comparisons. -mstack-offset=num Set the offset between the top of the stack and the stack pointer. E.g., a value of 8 means that the eight bytes in the range "sp+0...sp+7" can be used by leaf functions without stack allocation. Values other than 8 or 16 are untested and unlikely to work. Note also that this option changes the ABI; compiling a program with a different stack offset than the libraries have been compiled with generally does not work. This option can be useful if you want to evaluate if a different stack offset would give you better code, but to actually use a different stack offset to build working programs, it is recommended to configure the toolchain with the appropriate --with-stack-offset=num option. 
-mno-round-nearest Make the scheduler assume that the rounding mode has been set to truncating. The default is -mround-nearest. -mlong-calls If not otherwise specified by an attribute, assume all calls might be beyond the offset range of the "b" / "bl" instructions, and therefore load the function address into a register before performing an (otherwise direct) call. This is the default. -mshort-calls If not otherwise specified by an attribute, assume all direct calls are in the range of the "b" / "bl" instructions, so use these instructions for direct calls. The default is -mlong-calls. -msmall16 Assume addresses can be loaded as 16-bit unsigned values. This does not apply to function addresses for which -mlong-calls semantics are in effect. -mfp-mode=mode Set the prevailing mode of the floating-point unit. This determines the floating-point mode that is provided and expected at function call and return time. Making this mode match the mode you predominantly need at function start can make your programs smaller and faster by avoiding unnecessary mode switches. mode can be set to one of the following values: caller Any mode at function entry is valid, and retained or restored when the function returns, and when it calls other functions. This mode is useful for compiling libraries or other compilation units you might want to incorporate into different programs with different prevailing FPU modes, and the convenience of being able to use a single object file outweighs the size and speed overhead for any extra mode switching that might be needed, compared with what would be needed with a more specific choice of prevailing FPU mode. truncate This is the mode used for floating-point calculations with truncating (i.e. round towards zero) rounding mode. That includes conversion from floating point to integer. round-nearest This is the mode used for floating-point calculations with round-to-nearest-or-even rounding mode. int This is the mode used to perform integer calculations in the FPU, e.g. integer multiply, or integer multiply-and-accumulate. The default is -mfp-mode=caller. -mno-split-lohi -mno-postinc -mno-postmodify Code generation tweaks that disable, respectively, splitting of 32-bit loads, generation of post-increment addresses, and generation of post-modify addresses. The defaults are -msplit-lohi, -mpostinc, and -mpostmodify. -mnovect-double Change the preferred SIMD mode to SImode. The default is -mvect-double, which uses DImode as preferred SIMD mode. -max-vect-align=num The maximum alignment for SIMD vector mode types. num may be 4 or 8. The default is 8. Note that this is an ABI change, even though many library function interfaces are unaffected if they don't use SIMD vector modes in places that affect size and/or alignment of relevant types. -msplit-vecmove-early Split vector moves into single word moves before reload. In theory this can give better register allocation, but so far the reverse seems to be generally the case. -m1reg-reg Specify a register to hold the constant -1, which makes loading small negative constants and certain bitmasks faster. Allowable values for reg are r43 and r63, which specify use of that register as a fixed register, and none, which means that no register is used for this purpose. The default is -m1reg-none. AMD GCN Options These options are defined specifically for the AMD GCN port. -march=gpu -mtune=gpu Set architecture type or tuning for gpu. Supported values for gpu are fiji Compile for GCN3 Fiji devices (gfx803).
gfx900 Compile for GCN5 Vega 10 devices (gfx900). -mstack-size=bytes Specify how many bytes of stack space will be requested for each GPU thread (wave-front). Beware that there may be many threads and limited memory available. The size of the stack allocation may also have an impact on run-time performance. The default is 32KB when using OpenACC or OpenMP, and 1MB otherwise. ARC Options The following options control the architecture variant for which code is being compiled: -mbarrel-shifter Generate instructions supported by barrel shifter. This is the default unless -mcpu=ARC601 or -mcpu=ARCEM is in effect. -mjli-always Force to call a function using jli_s instruction. This option is valid only for ARCv2 architecture. -mcpu=cpu Set architecture type, register usage, and instruction scheduling parameters for cpu. There are also shortcut alias options available for backward compatibility and convenience. Supported values for cpu are arc600 Compile for ARC600. Aliases: -mA6, -mARC600. arc601 Compile for ARC601. Alias: -mARC601. arc700 Compile for ARC700. Aliases: -mA7, -mARC700. This is the default when configured with --with-cpu=arc700. arcem Compile for ARC EM. archs Compile for ARC HS. em Compile for ARC EM CPU with no hardware extensions. em4 Compile for ARC EM4 CPU. em4_dmips Compile for ARC EM4 DMIPS CPU. em4_fpus Compile for ARC EM4 DMIPS CPU with the single-precision floating-point extension. em4_fpuda Compile for ARC EM4 DMIPS CPU with single-precision floating-point and double assist instructions. hs Compile for ARC HS CPU with no hardware extensions except the atomic instructions. hs34 Compile for ARC HS34 CPU. hs38 Compile for ARC HS38 CPU. hs38_linux Compile for ARC HS38 CPU with all hardware extensions on. arc600_norm Compile for ARC 600 CPU with "norm" instructions enabled. arc600_mul32x16 Compile for ARC 600 CPU with "norm" and 32x16-bit multiply instructions enabled. arc600_mul64 Compile for ARC 600 CPU with "norm" and "mul64"-family instructions enabled. arc601_norm Compile for ARC 601 CPU with "norm" instructions enabled. arc601_mul32x16 Compile for ARC 601 CPU with "norm" and 32x16-bit multiply instructions enabled. arc601_mul64 Compile for ARC 601 CPU with "norm" and "mul64"-family instructions enabled. nps400 Compile for ARC 700 on NPS400 chip. em_mini Compile for ARC EM minimalist configuration featuring reduced register set. -mdpfp -mdpfp-compact Generate double-precision FPX instructions, tuned for the compact implementation. -mdpfp-fast Generate double-precision FPX instructions, tuned for the fast implementation. -mno-dpfp-lrsr Disable "lr" and "sr" instructions from using FPX extension aux registers. -mea Generate extended arithmetic instructions. Currently only "divaw", "adds", "subs", and "sat16" are supported. This is always enabled for -mcpu=ARC700. -mno-mpy Do not generate "mpy"-family instructions for ARC700. This option is deprecated. -mmul32x16 Generate 32x16-bit multiply and multiply-accumulate instructions. -mmul64 Generate "mul64" and "mulu64" instructions. Only valid for -mcpu=ARC600. -mnorm Generate "norm" instructions. This is the default if -mcpu=ARC700 is in effect. -mspfp -mspfp-compact Generate single-precision FPX instructions, tuned for the compact implementation. -mspfp-fast Generate single-precision FPX instructions, tuned for the fast implementation. -msimd Enable generation of ARC SIMD instructions via target- specific builtins. Only valid for -mcpu=ARC700. -msoft-float This option ignored; it is provided for compatibility purposes only. 
Software floating-point code is emitted by default, and this default can overridden by FPX options; -mspfp, -mspfp-compact, or -mspfp-fast for single precision, and -mdpfp, -mdpfp-compact, or -mdpfp-fast for double precision. -mswap Generate "swap" instructions. -matomic This enables use of the locked load/store conditional extension to implement atomic memory built-in functions. Not available for ARC 6xx or ARC EM cores. -mdiv-rem Enable "div" and "rem" instructions for ARCv2 cores. -mcode-density Enable code density instructions for ARC EM. This option is on by default for ARC HS. -mll64 Enable double load/store operations for ARC HS cores. -mtp-regno=regno Specify thread pointer register number. -mmpy-option=multo Compile ARCv2 code with a multiplier design option. You can specify the option using either a string or numeric value for multo. wlh1 is the default value. The recognized values are: 0 none No multiplier available. 1 w 16x16 multiplier, fully pipelined. The following instructions are enabled: "mpyw" and "mpyuw". 2 wlh1 32x32 multiplier, fully pipelined (1 stage). The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 3 wlh2 32x32 multiplier, fully pipelined (2 stages). The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 4 wlh3 Two 16x16 multipliers, blocking, sequential. The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 5 wlh4 One 16x16 multiplier, blocking, sequential. The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 6 wlh5 One 32x4 multiplier, blocking, sequential. The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 7 plus_dmpy ARC HS SIMD support. 8 plus_macd ARC HS SIMD support. 9 plus_qmacw ARC HS SIMD support. This option is only available for ARCv2 cores. -mfpu=fpu Enables support for specific floating-point hardware extensions for ARCv2 cores. Supported values for fpu are: fpus Enables support for single-precision floating-point hardware extensions. fpud Enables support for double-precision floating-point hardware extensions. The single-precision floating-point extension is also enabled. Not available for ARC EM. fpuda Enables support for double-precision floating-point hardware extensions using double-precision assist instructions. The single-precision floating-point extension is also enabled. This option is only available for ARC EM. fpuda_div Enables support for double-precision floating-point hardware extensions using double-precision assist instructions. The single-precision floating-point, square-root, and divide extensions are also enabled. This option is only available for ARC EM. fpuda_fma Enables support for double-precision floating-point hardware extensions using double-precision assist instructions. The single-precision floating-point and fused multiply and add hardware extensions are also enabled. This option is only available for ARC EM. fpuda_all Enables support for double-precision floating-point hardware extensions using double-precision assist instructions. All single-precision floating-point hardware extensions are also enabled. This option is only available for ARC EM. fpus_div Enables support for single-precision floating-point, square-root and divide hardware extensions. fpud_div Enables support for double-precision floating-point, square-root and divide hardware extensions. This option includes option fpus_div. 
Not available for ARC EM. fpus_fma Enables support for single-precision floating-point and fused multiply and add hardware extensions. fpud_fma Enables support for double-precision floating-point and fused multiply and add hardware extensions. This option includes option fpus_fma. Not available for ARC EM. fpus_all Enables support for all single-precision floating-point hardware extensions. fpud_all Enables support for all single- and double-precision floating-point hardware extensions. Not available for ARC EM. -mirq-ctrl-saved=register-range, blink, lp_count Specifies general-purposes registers that the processor automatically saves/restores on interrupt entry and exit. register-range is specified as two registers separated by a dash. The register range always starts with "r0", the upper limit is "fp" register. blink and lp_count are optional. This option is only valid for ARC EM and ARC HS cores. -mrgf-banked-regs=number Specifies the number of registers replicated in second register bank on entry to fast interrupt. Fast interrupts are interrupts with the highest priority level P0. These interrupts save only PC and STATUS32 registers to avoid memory transactions during interrupt entry and exit sequences. Use this option when you are using fast interrupts in an ARC V2 family processor. Permitted values are 4, 8, 16, and 32. -mlpc-width=width Specify the width of the "lp_count" register. Valid values for width are 8, 16, 20, 24, 28 and 32 bits. The default width is fixed to 32 bits. If the width is less than 32, the compiler does not attempt to transform loops in your program to use the zero-delay loop mechanism unless it is known that the "lp_count" register can hold the required loop-counter value. Depending on the width specified, the compiler and run-time library might continue to use the loop mechanism for various needs. This option defines macro "__ARC_LPC_WIDTH__" with the value of width. -mrf16 This option instructs the compiler to generate code for a 16-entry register file. This option defines the "__ARC_RF16__" preprocessor macro. -mbranch-index Enable use of "bi" or "bih" instructions to implement jump tables. The following options are passed through to the assembler, and also define preprocessor macro symbols. -mdsp-packa Passed down to the assembler to enable the DSP Pack A extensions. Also sets the preprocessor symbol "__Xdsp_packa". This option is deprecated. -mdvbf Passed down to the assembler to enable the dual Viterbi butterfly extension. Also sets the preprocessor symbol "__Xdvbf". This option is deprecated. -mlock Passed down to the assembler to enable the locked load/store conditional extension. Also sets the preprocessor symbol "__Xlock". -mmac-d16 Passed down to the assembler. Also sets the preprocessor symbol "__Xxmac_d16". This option is deprecated. -mmac-24 Passed down to the assembler. Also sets the preprocessor symbol "__Xxmac_24". This option is deprecated. -mrtsc Passed down to the assembler to enable the 64-bit time-stamp counter extension instruction. Also sets the preprocessor symbol "__Xrtsc". This option is deprecated. -mswape Passed down to the assembler to enable the swap byte ordering extension instruction. Also sets the preprocessor symbol "__Xswape". -mtelephony Passed down to the assembler to enable dual- and single- operand instructions for telephony. Also sets the preprocessor symbol "__Xtelephony". This option is deprecated. -mxy Passed down to the assembler to enable the XY memory extension. Also sets the preprocessor symbol "__Xxy". 
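Several of the options above define preprocessor symbols (some in addition to their main effect), which makes it possible to keep target-specific code paths in ordinary C. A minimal sketch, assuming an ARC compiler and only the macros documented above (the file and function names are hypothetical):
    /* arc-config-demo.c - hypothetical example.
       __ARC_RF16__ is defined by -mrf16 and __ARC_LPC_WIDTH__ by -mlpc-width=width,
       as described above; on other configurations the fallback branch is used. */
    #include <stdio.h>

    void report_arc_build_config(void)
    {
    #ifdef __ARC_RF16__
        puts("compiled for the reduced 16-entry register file");
    #else
        puts("compiled for the full register file");
    #endif
    #ifdef __ARC_LPC_WIDTH__
        printf("lp_count width assumed by the compiler: %d bits\n", __ARC_LPC_WIDTH__);
    #endif
    }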
The following options control how the assembly code is annotated: -misize Annotate assembler instructions with estimated addresses. -mannotate-align Explain what alignment considerations lead to the decision to make an instruction short or long. The following options are passed through to the linker: -marclinux Passed through to the linker, to specify use of the "arclinux" emulation. This option is enabled by default in tool chains built for "arc-linux-uclibc" and "arceb-linux-uclibc" targets when profiling is not requested. -marclinux_prof Passed through to the linker, to specify use of the "arclinux_prof" emulation. This option is enabled by default in tool chains built for "arc-linux-uclibc" and "arceb-linux-uclibc" targets when profiling is requested. The following options control the semantics of generated code: -mlong-calls Generate calls as register indirect calls, thus providing access to the full 32-bit address range. -mmedium-calls Don't use less than 25-bit addressing range for calls, which is the offset available for an unconditional branch-and-link instruction. Conditional execution of function calls is suppressed, to allow use of the 25-bit range, rather than the 21-bit range with conditional branch-and-link. This is the default for tool chains built for "arc-linux-uclibc" and "arceb-linux-uclibc" targets. -G num Put definitions of externally-visible data in a small data section if that data is no bigger than num bytes. The default value of num is 4 for any ARC configuration, or 8 when we have double load/store operations. -mno-sdata Do not generate sdata references. This is the default for tool chains built for "arc-linux-uclibc" and "arceb-linux-uclibc" targets. -mvolatile-cache Use ordinarily cached memory accesses for volatile references. This is the default. -mno-volatile-cache Enable cache bypass for volatile references. The following options fine tune code generation: -malign-call Do alignment optimizations for call instructions. -mauto-modify-reg Enable the use of pre/post modify with register displacement. -mbbit-peephole Enable bbit peephole2. -mno-brcc This option disables a target-specific pass in arc_reorg to generate compare-and-branch ("brcc") instructions. It has no effect on generation of these instructions driven by the combiner pass. -mcase-vector-pcrel Use PC-relative switch case tables to enable case table shortening. This is the default for -Os. -mcompact-casesi Enable compact "casesi" pattern. This is the default for -Os, and only available for ARCv1 cores. This option is deprecated. -mno-cond-exec Disable the ARCompact-specific pass to generate conditional execution instructions. Due to delay slot scheduling and interactions between operand numbers, literal sizes, instruction lengths, and the support for conditional execution, the target-independent pass to generate conditional execution is often lacking, so the ARC port has kept a special pass around that tries to find more conditional execution generation opportunities after register allocation, branch shortening, and delay slot scheduling have been done. This pass generally, but not always, improves performance and code size, at the cost of extra compilation time, which is why there is an option to switch it off. If you have a problem with call instructions exceeding their allowable offset range because they are conditionalized, you should consider using -mmedium-calls instead. -mearly-cbranchsi Enable pre-reload use of the "cbranchsi" pattern. 
-mexpand-adddi Expand "adddi3" and "subdi3" at RTL generation time into "add.f", "adc" etc. This option is deprecated. -mindexed-loads Enable the use of indexed loads. This can be problematic because some optimizers then assume that indexed stores exist, which is not the case. -mlra Enable Local Register Allocation. This is still experimental for ARC, so by default the compiler uses standard reload (i.e. -mno-lra). -mlra-priority-none Don't indicate any priority for target registers. -mlra-priority-compact Indicate target register priority for r0..r3 / r12..r15. -mlra-priority-noncompact Reduce target register priority for r0..r3 / r12..r15. -mmillicode When optimizing for size (using -Os), prologues and epilogues that have to save or restore a large number of registers are often shortened by using call to a special function in libgcc; this is referred to as a millicode call. As these calls can pose performance issues, and/or cause linking issues when linking in a nonstandard way, this option is provided to turn on or off millicode call generation. -mcode-density-frame This option enable the compiler to emit "enter" and "leave" instructions. These instructions are only valid for CPUs with code-density feature. -mmixed-code Tweak register allocation to help 16-bit instruction generation. This generally has the effect of decreasing the average instruction size while increasing the instruction count. -mq-class Enable q instruction alternatives. This is the default for -Os. -mRcq Enable Rcq constraint handling. Most short code generation depends on this. This is the default. -mRcw Enable Rcw constraint handling. Most ccfsm condexec mostly depends on this. This is the default. -msize-level=level Fine-tune size optimization with regards to instruction lengths and alignment. The recognized values for level are: 0 No size optimization. This level is deprecated and treated like 1. 1 Short instructions are used opportunistically. 2 In addition, alignment of loops and of code after barriers are dropped. 3 In addition, optional data alignment is dropped, and the option Os is enabled. This defaults to 3 when -Os is in effect. Otherwise, the behavior when this is not set is equivalent to level 1. -mtune=cpu Set instruction scheduling parameters for cpu, overriding any implied by -mcpu=. Supported values for cpu are ARC600 Tune for ARC600 CPU. ARC601 Tune for ARC601 CPU. ARC700 Tune for ARC700 CPU with standard multiplier block. ARC700-xmac Tune for ARC700 CPU with XMAC block. ARC725D Tune for ARC725D CPU. ARC750D Tune for ARC750D CPU. -mmultcost=num Cost to assume for a multiply instruction, with 4 being equal to a normal instruction. -munalign-prob-threshold=probability Set probability threshold for unaligning branches. When tuning for ARC700 and optimizing for speed, branches without filled delay slot are preferably emitted unaligned and long, unless profiling indicates that the probability for the branch to be taken is below probability. The default is (REG_BR_PROB_BASE/2), i.e. 5000. The following options are maintained for backward compatibility, but are now deprecated and will be removed in a future release: -margonaut Obsolete FPX. -mbig-endian -EB Compile code for big-endian targets. Use of these options is now deprecated. Big-endian code is supported by configuring GCC to build "arceb-elf32" and "arceb-linux-uclibc" targets, for which big endian is the default. -mlittle-endian -EL Compile code for little-endian targets. Use of these options is now deprecated. 
Little-endian code is supported by configuring GCC to build "arc-elf32" and "arc-linux-uclibc" targets, for which little endian is the default. -mbarrel_shifter Replaced by -mbarrel-shifter. -mdpfp_compact Replaced by -mdpfp-compact. -mdpfp_fast Replaced by -mdpfp-fast. -mdsp_packa Replaced by -mdsp-packa. -mEA Replaced by -mea. -mmac_24 Replaced by -mmac-24. -mmac_d16 Replaced by -mmac-d16. -mspfp_compact Replaced by -mspfp-compact. -mspfp_fast Replaced by -mspfp-fast. -mtune=cpu Values arc600, arc601, arc700 and arc700-xmac for cpu are replaced by ARC600, ARC601, ARC700 and ARC700-xmac respectively. -multcost=num Replaced by -mmultcost. ARM Options These -m options are defined for the ARM port: -mabi=name Generate code for the specified ABI. Permissible values are: apcs-gnu, atpcs, aapcs, aapcs-linux and iwmmxt. -mapcs-frame Generate a stack frame that is compliant with the ARM Procedure Call Standard for all functions, even if this is not strictly necessary for correct execution of the code. Specifying -fomit-frame-pointer with this option causes the stack frames not to be generated for leaf functions. The default is -mno-apcs-frame. This option is deprecated. -mapcs This is a synonym for -mapcs-frame and is deprecated. -mthumb-interwork Generate code that supports calling between the ARM and Thumb instruction sets. Without this option, on pre-v5 architectures, the two instruction sets cannot be reliably used inside one program. The default is -mno-thumb-interwork, since slightly larger code is generated when -mthumb-interwork is specified. In AAPCS configurations this option is meaningless. -mno-sched-prolog Prevent the reordering of instructions in the function prologue, or the merging of those instruction with the instructions in the function's body. This means that all functions start with a recognizable set of instructions (or in fact one of a choice from a small set of different function prologues), and this information can be used to locate the start of functions inside an executable piece of code. The default is -msched-prolog. -mfloat-abi=name Specifies which floating-point ABI to use. Permissible values are: soft, softfp and hard. Specifying soft causes GCC to generate output containing library calls for floating-point operations. softfp allows the generation of code using hardware floating-point instructions, but still uses the soft-float calling conventions. hard allows generation of floating-point instructions and uses FPU-specific calling conventions. The default depends on the specific target configuration. Note that the hard-float and soft-float ABIs are not link- compatible; you must compile your entire program with the same ABI, and link with a compatible set of libraries. -mgeneral-regs-only Generate code which uses only the general-purpose registers. This will prevent the compiler from using floating-point and Advanced SIMD registers but will not impose any restrictions on the assembler. -mlittle-endian Generate code for a processor running in little-endian mode. This is the default for all standard configurations. -mbig-endian Generate code for a processor running in big-endian mode; the default is to compile code for a little-endian processor. -mbe8 -mbe32 When linking a big-endian image select between BE8 and BE32 formats. The option has no effect for little-endian images and is ignored. The default is dependent on the selected target architecture. For ARMv6 and later architectures the default is BE8, for older architectures the default is BE32. 
BE32 format has been deprecated by ARM. -march=name[+extension...] This specifies the name of the target ARM architecture. GCC uses this name to determine what kind of instructions it can emit when generating assembly code. This option can be used in conjunction with or instead of the -mcpu= option. Permissible names are: armv4t, armv5t, armv5te, armv6, armv6j, armv6k, armv6kz, armv6t2, armv6z, armv6zk, armv7, armv7-a, armv7ve, armv8-a, armv8.1-a, armv8.2-a, armv8.3-a, armv8.4-a, armv8.5-a, armv7-r, armv8-r, armv6-m, armv6s-m, armv7-m, armv7e-m, armv8-m.base, armv8-m.main, iwmmxt and iwmmxt2. Additionally, the following architectures, which lack support for the Thumb execution state, are recognized but support is deprecated: armv4. Many of the architectures support extensions. These can be added by appending +extension to the architecture name. Extension options are processed in order and capabilities accumulate. An extension will also enable any necessary base extensions upon which it depends. For example, the +crypto extension will always enable the +simd extension. The exception to the additive construction is for extensions that are prefixed with +no...: these extensions disable the specified option and any other extensions that may depend on the presence of that extension. For example, -march=armv7-a+simd+nofp+vfpv4 is equivalent to writing -march=armv7-a+vfpv4 since the +simd option is entirely disabled by the +nofp option that follows it. Most extension names are generically named, but have an effect that is dependent upon the architecture to which it is applied. For example, the +simd option can be applied to both armv7-a and armv8-a architectures, but will enable the original ARMv7-A Advanced SIMD (Neon) extensions for armv7-a and the ARMv8-A variant for armv8-a. The table below lists the supported extensions for each architecture. Architectures not mentioned do not support any extensions. armv5te armv6 armv6j armv6k armv6kz armv6t2 armv6z armv6zk +fp The VFPv2 floating-point instructions. The extension +vfpv2 can be used as an alias for this extension. +nofp Disable the floating-point instructions. armv7 The common subset of the ARMv7-A, ARMv7-R and ARMv7-M architectures. +fp The VFPv3 floating-point instructions, with 16 double-precision registers. The extension +vfpv3-d16 can be used as an alias for this extension. Note that floating-point is not supported by the base ARMv7-M architecture, but is compatible with both the ARMv7-A and ARMv7-R architectures. +nofp Disable the floating-point instructions. armv7-a +mp The multiprocessing extension. +sec The security extension. +fp The VFPv3 floating-point instructions, with 16 double-precision registers. The extension +vfpv3-d16 can be used as an alias for this extension. +simd The Advanced SIMD (Neon) v1 and the VFPv3 floating- point instructions. The extensions +neon and +neon-vfpv3 can be used as aliases for this extension. +vfpv3 The VFPv3 floating-point instructions, with 32 double-precision registers. +vfpv3-d16-fp16 The VFPv3 floating-point instructions, with 16 double-precision registers and the half-precision floating-point conversion operations. +vfpv3-fp16 The VFPv3 floating-point instructions, with 32 double-precision registers and the half-precision floating-point conversion operations. +vfpv4-d16 The VFPv4 floating-point instructions, with 16 double-precision registers. +vfpv4 The VFPv4 floating-point instructions, with 32 double-precision registers. 
+neon-fp16 The Advanced SIMD (Neon) v1 and the VFPv3 floating- point instructions, with the half-precision floating- point conversion operations. +neon-vfpv4 The Advanced SIMD (Neon) v2 and the VFPv4 floating- point instructions. +nosimd Disable the Advanced SIMD instructions (does not disable floating point). +nofp Disable the floating-point and Advanced SIMD instructions. armv7ve The extended version of the ARMv7-A architecture with support for virtualization. +fp The VFPv4 floating-point instructions, with 16 double-precision registers. The extension +vfpv4-d16 can be used as an alias for this extension. +simd The Advanced SIMD (Neon) v2 and the VFPv4 floating- point instructions. The extension +neon-vfpv4 can be used as an alias for this extension. +vfpv3-d16 The VFPv3 floating-point instructions, with 16 double-precision registers. +vfpv3 The VFPv3 floating-point instructions, with 32 double-precision registers. +vfpv3-d16-fp16 The VFPv3 floating-point instructions, with 16 double-precision registers and the half-precision floating-point conversion operations. +vfpv3-fp16 The VFPv3 floating-point instructions, with 32 double-precision registers and the half-precision floating-point conversion operations. +vfpv4-d16 The VFPv4 floating-point instructions, with 16 double-precision registers. +vfpv4 The VFPv4 floating-point instructions, with 32 double-precision registers. +neon The Advanced SIMD (Neon) v1 and the VFPv3 floating- point instructions. The extension +neon-vfpv3 can be used as an alias for this extension. +neon-fp16 The Advanced SIMD (Neon) v1 and the VFPv3 floating- point instructions, with the half-precision floating- point conversion operations. +nosimd Disable the Advanced SIMD instructions (does not disable floating point). +nofp Disable the floating-point and Advanced SIMD instructions. armv8-a +crc The Cyclic Redundancy Check (CRC) instructions. +simd The ARMv8-A Advanced SIMD and floating-point instructions. +crypto The cryptographic instructions. +nocrypto Disable the cryptographic instructions. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. +sb Speculation Barrier Instruction. +predres Execution and Data Prediction Restriction Instructions. armv8.1-a +simd The ARMv8.1-A Advanced SIMD and floating-point instructions. +crypto The cryptographic instructions. This also enables the Advanced SIMD and floating-point instructions. +nocrypto Disable the cryptographic instructions. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. +sb Speculation Barrier Instruction. +predres Execution and Data Prediction Restriction Instructions. armv8.2-a armv8.3-a +fp16 The half-precision floating-point data processing instructions. This also enables the Advanced SIMD and floating-point instructions. +fp16fml The half-precision floating-point fmla extension. This also enables the half-precision floating-point extension and Advanced SIMD and floating-point instructions. +simd The ARMv8.1-A Advanced SIMD and floating-point instructions. +crypto The cryptographic instructions. This also enables the Advanced SIMD and floating-point instructions. +dotprod Enable the Dot Product extension. This also enables Advanced SIMD instructions. +nocrypto Disable the cryptographic extension. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. +sb Speculation Barrier Instruction. +predres Execution and Data Prediction Restriction Instructions. 
armv8.4-a +fp16 The half-precision floating-point data processing instructions. This also enables the Advanced SIMD and floating-point instructions as well as the Dot Product extension and the half-precision floating- point fmla extension. +simd The ARMv8.3-A Advanced SIMD and floating-point instructions as well as the Dot Product extension. +crypto The cryptographic instructions. This also enables the Advanced SIMD and floating-point instructions as well as the Dot Product extension. +nocrypto Disable the cryptographic extension. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. +sb Speculation Barrier Instruction. +predres Execution and Data Prediction Restriction Instructions. armv8.5-a +fp16 The half-precision floating-point data processing instructions. This also enables the Advanced SIMD and floating-point instructions as well as the Dot Product extension and the half-precision floating- point fmla extension. +simd The ARMv8.3-A Advanced SIMD and floating-point instructions as well as the Dot Product extension. +crypto The cryptographic instructions. This also enables the Advanced SIMD and floating-point instructions as well as the Dot Product extension. +nocrypto Disable the cryptographic extension. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. armv7-r +fp.sp The single-precision VFPv3 floating-point instructions. The extension +vfpv3xd can be used as an alias for this extension. +fp The VFPv3 floating-point instructions with 16 double- precision registers. The extension +vfpv3-d16 can be used as an alias for this extension. +vfpv3xd-d16-fp16 The single-precision VFPv3 floating-point instructions with 16 double-precision registers and the half-precision floating-point conversion operations. +vfpv3-d16-fp16 The VFPv3 floating-point instructions with 16 double- precision registers and the half-precision floating- point conversion operations. +nofp Disable the floating-point extension. +idiv The ARM-state integer division instructions. +noidiv Disable the ARM-state integer division extension. armv7e-m +fp The single-precision VFPv4 floating-point instructions. +fpv5 The single-precision FPv5 floating-point instructions. +fp.dp The single- and double-precision FPv5 floating-point instructions. +nofp Disable the floating-point extensions. armv8-m.main +dsp The DSP instructions. +nodsp Disable the DSP extension. +fp The single-precision floating-point instructions. +fp.dp The single- and double-precision floating-point instructions. +nofp Disable the floating-point extension. armv8-r +crc The Cyclic Redundancy Check (CRC) instructions. +fp.sp The single-precision FPv5 floating-point instructions. +simd The ARMv8-A Advanced SIMD and floating-point instructions. +crypto The cryptographic instructions. +nocrypto Disable the cryptographic instructions. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. -march=native causes the compiler to auto-detect the architecture of the build computer. At present, this feature is only supported on GNU/Linux, and not all architectures are recognized. If the auto-detect is unsuccessful the option has no effect. -mtune=name This option specifies the name of the target ARM processor for which GCC should tune the performance of the code. For some ARM implementations better performance can be obtained by using this option. 
Permissible names are: arm7tdmi, arm7tdmi-s, arm710t, arm720t, arm740t, strongarm, strongarm110, strongarm1100, strongarm1110, arm8, arm810, arm9, arm9e, arm920, arm920t, arm922t, arm946e-s, arm966e-s, arm968e-s, arm926ej-s, arm940t, arm9tdmi, arm10tdmi, arm1020t, arm1026ej-s, arm10e, arm1020e, arm1022e, arm1136j-s, arm1136jf-s, mpcore, mpcorenovfp, arm1156t2-s, arm1156t2f-s, arm1176jz-s, arm1176jzf-s, generic-armv7-a, cortex-a5, cortex-a7, cortex-a8, cortex-a9, cortex-a12, cortex-a15, cortex-a17, cortex-a32, cortex-a35, cortex-a53, cortex-a55, cortex-a57, cortex-a72, cortex-a73, cortex-a75, cortex-a76, ares, cortex-r4, cortex-r4f, cortex-r5, cortex-r7, cortex-r8, cortex-r52, cortex-m0, cortex-m0plus, cortex-m1, cortex-m3, cortex-m4, cortex-m7, cortex-m23, cortex-m33, cortex-m1.small-multiply, cortex-m0.small-multiply, cortex-m0plus.small-multiply, exynos-m1, marvell-pj4, neoverse-n1, neoverse-n2, neoverse-v1, xscale, iwmmxt, iwmmxt2, ep9312, fa526, fa626, fa606te, fa626te, fmp626, fa726te, xgene1. Additionally, this option can specify that GCC should tune the performance of the code for a big.LITTLE system. Permissible names are: cortex-a15.cortex-a7, cortex-a17.cortex-a7, cortex-a57.cortex-a53, cortex-a72.cortex-a53, cortex-a72.cortex-a35, cortex-a73.cortex-a53, cortex-a75.cortex-a55, cortex-a76.cortex-a55. -mtune=generic-arch specifies that GCC should tune the performance for a blend of processors within architecture arch. The aim is to generate code that runs well on the most popular current processors, balancing between optimizations that benefit some CPUs in the range, and avoiding performance pitfalls of other CPUs. The effects of this option may change in future GCC versions as CPU models come and go. -mtune permits the same extension options as -mcpu, but the extension options do not affect the tuning of the generated code. -mtune=native causes the compiler to auto-detect the CPU of the build computer. At present, this feature is only supported on GNU/Linux, and not all architectures are recognized. If the auto-detect is unsuccessful the option has no effect. -mcpu=name[+extension...] This specifies the name of the target ARM processor. GCC uses this name to derive the name of the target ARM architecture (as if specified by -march) and the ARM processor type to tune for (as if specified by -mtune). Where this option is used in conjunction with -march or -mtune, those options take precedence over the appropriate part of this option. Many of the supported CPUs implement optional architectural extensions. Where this is so, the architectural extensions are normally enabled by default. If implementations that lack the extension exist, then the extension syntax can be used to disable those extensions that have been omitted. For floating-point and Advanced SIMD (Neon) instructions, the settings of the options -mfloat-abi and -mfpu must also be considered: floating-point and Advanced SIMD instructions will only be used if -mfloat-abi is not set to soft; and any setting of -mfpu other than auto will override the available floating-point and SIMD extension instructions. For example, cortex-a9 can be found in three major configurations: integer only, with just a floating-point unit, or with floating-point and Advanced SIMD. The default is to enable all the instructions, but the extensions +nosimd and +nofp can be used to disable just the SIMD or both the SIMD and floating-point instructions respectively. Permissible names for this option are the same as those for -mtune.
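As an illustration of the cortex-a9 configurations just described, the sketch below selects Advanced SIMD code when it is available. The intrinsics and the "__ARM_NEON" macro come from the ARM C Language Extensions rather than from this option description, and the command lines in the comment are hypothetical examples, not a recommended build recipe.

        /* Possible invocations (illustrative only):
             gcc -mcpu=cortex-a9 -mfpu=auto -mfloat-abi=hard -c scale.c
             gcc -mcpu=cortex-a9+nofp -mfloat-abi=soft -c scale.c        */
        #include <stddef.h>
        #ifdef __ARM_NEON            /* defined when Advanced SIMD is usable */
        #include <arm_neon.h>
        #endif

        void scale (float *x, size_t n, float k)
        {
          size_t i = 0;
        #ifdef __ARM_NEON
          for (; i + 4 <= n; i += 4)   /* four floats per Neon operation */
            vst1q_f32 (x + i, vmulq_n_f32 (vld1q_f32 (x + i), k));
        #endif
          for (; i < n; i++)           /* scalar tail, or the whole loop without Neon */
            x[i] *= k;
        }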
The following extension options are common to the listed CPUs: +nodsp Disable the DSP instructions on cortex-m33. +nofp Disables the floating-point instructions on arm9e, arm946e-s, arm966e-s, arm968e-s, arm10e, arm1020e, arm1022e, arm926ej-s, arm1026ej-s, cortex-r5, cortex-r7, cortex-r8, cortex-m4, cortex-m7 and cortex-m33. Disables the floating-point and SIMD instructions on generic-armv7-a, cortex-a5, cortex-a7, cortex-a8, cortex-a9, cortex-a12, cortex-a15, cortex-a17, cortex-a15.cortex-a7, cortex-a17.cortex-a7, cortex-a32, cortex-a35, cortex-a53 and cortex-a55. +nofp.dp Disables the double-precision component of the floating- point instructions on cortex-r5, cortex-r7, cortex-r8, cortex-r52 and cortex-m7. +nosimd Disables the SIMD (but not floating-point) instructions on generic-armv7-a, cortex-a5, cortex-a7 and cortex-a9. +crypto Enables the cryptographic instructions on cortex-a32, cortex-a35, cortex-a53, cortex-a55, cortex-a57, cortex-a72, cortex-a73, cortex-a75, exynos-m1, xgene1, cortex-a57.cortex-a53, cortex-a72.cortex-a53, cortex-a73.cortex-a35, cortex-a73.cortex-a53 and cortex-a75.cortex-a55. Additionally the generic-armv7-a pseudo target defaults to VFPv3 with 16 double-precision registers. It supports the following extension options: mp, sec, vfpv3-d16, vfpv3, vfpv3-d16-fp16, vfpv3-fp16, vfpv4-d16, vfpv4, neon, neon-vfpv3, neon-fp16, neon-vfpv4. The meanings are the same as for the extensions to -march=armv7-a. -mcpu=generic-arch is also permissible, and is equivalent to -march=arch -mtune=generic-arch. See -mtune for more information. -mcpu=native causes the compiler to auto-detect the CPU of the build computer. At present, this feature is only supported on GNU/Linux, and not all architectures are recognized. If the auto-detect is unsuccessful the option has no effect. -mfpu=name This specifies what floating-point hardware (or hardware emulation) is available on the target. Permissible names are: auto, vfpv2, vfpv3, vfpv3-fp16, vfpv3-d16, vfpv3-d16-fp16, vfpv3xd, vfpv3xd-fp16, neon-vfpv3, neon-fp16, vfpv4, vfpv4-d16, fpv4-sp-d16, neon-vfpv4, fpv5-d16, fpv5-sp-d16, fp-armv8, neon-fp-armv8 and crypto-neon-fp-armv8. Note that neon is an alias for neon-vfpv3 and vfp is an alias for vfpv2. The setting auto is the default and is special. It causes the compiler to select the floating-point and Advanced SIMD instructions based on the settings of -mcpu and -march. If the selected floating-point hardware includes the NEON extension (e.g. -mfpu=neon), note that floating-point operations are not generated by GCC's auto-vectorization pass unless -funsafe-math-optimizations is also specified. This is because NEON hardware does not fully implement the IEEE 754 standard for floating-point arithmetic (in particular denormal values are treated as zero), so the use of NEON instructions may lead to a loss of precision. You can also set the fpu name at function level by using the "target("fpu=")" function attributes or pragmas. -mfp16-format=name Specify the format of the "__fp16" half-precision floating- point type. Permissible names are none, ieee, and alternative; the default is none, in which case the "__fp16" type is not defined. -mstructure-size-boundary=n The sizes of all structures and unions are rounded up to a multiple of the number of bits set by this option. Permissible values are 8, 32 and 64. The default value varies for different toolchains. For the COFF targeted toolchain the default value is 8. A value of 64 is only allowed if the underlying ABI supports it. 
Specifying a larger number can produce faster, more efficient code, but can also increase the size of the program. Different values are potentially incompatible. Code compiled with one value cannot necessarily expect to work with code or libraries compiled with another value, if they exchange information using structures or unions. This option is deprecated. -mabort-on-noreturn Generate a call to the function "abort" at the end of a "noreturn" function. It is executed if the function tries to return. -mlong-calls -mno-long-calls Tells the compiler to perform function calls by first loading the address of the function into a register and then performing a subroutine call on this register. This switch is needed if the target function lies outside of the 64-megabyte addressing range of the offset-based version of subroutine call instruction. Even if this switch is enabled, not all function calls are turned into long calls. The heuristic is that static functions, functions that have the "short_call" attribute, functions that are inside the scope of a "#pragma no_long_calls" directive, and functions whose definitions have already been compiled within the current compilation unit are not turned into long calls. The exceptions to this rule are that weak function definitions, functions with the "long_call" attribute or the "section" attribute, and functions that are within the scope of a "#pragma long_calls" directive are always turned into long calls. This feature is not enabled by default. Specifying -mno-long-calls restores the default behavior, as does placing the function calls within the scope of a "#pragma long_calls_off" directive. Note these switches have no effect on how the compiler generates code to handle function calls via function pointers. -msingle-pic-base Treat the register used for PIC addressing as read-only, rather than loading it in the prologue for each function. The runtime system is responsible for initializing this register with an appropriate value before execution begins. -mpic-register=reg Specify the register to be used for PIC addressing. For standard PIC base case, the default is any suitable register determined by compiler. For single PIC base case, the default is R9 if target is EABI based or stack-checking is enabled, otherwise the default is R10. -mpic-data-is-text-relative Assume that the displacement between the text and data segments is fixed at static link time. This permits using PC-relative addressing operations to access data known to be in the data segment. For non-VxWorks RTP targets, this option is enabled by default. When disabled on such targets, it will enable -msingle-pic-base by default. -mpoke-function-name Write the name of each function into the text section, directly preceding the function prologue. The generated code is similar to this: t0 .ascii "arm_poke_function_name", 0 .align t1 .word 0xff000000 + (t1 - t0) arm_poke_function_name mov ip, sp stmfd sp!, {fp, ip, lr, pc} sub fp, ip, #4 When performing a stack backtrace, code can inspect the value of "pc" stored at "fp + 0". If the trace function then looks at location "pc - 12" and the top 8 bits are set, then we know that there is a function name embedded immediately preceding this location and has length "((pc[-3]) & 0xff000000)". -mthumb -marm Select between generating code that executes in ARM and Thumb states. The default for most configurations is to generate code that executes in ARM state, but the default can be changed by configuring GCC with the --with-mode=state configure option. 
You can also override the ARM and Thumb mode for each function by using the "target("thumb")" and "target("arm")" function attributes or pragmas. -mflip-thumb Switch ARM/Thumb modes on alternating functions. This option is provided for regression testing of mixed Thumb/ARM code generation, and is not intended for ordinary use in compiling code. -mtpcs-frame Generate a stack frame that is compliant with the Thumb Procedure Call Standard for all non-leaf functions. (A leaf function is one that does not call any other functions.) The default is -mno-tpcs-frame. -mtpcs-leaf-frame Generate a stack frame that is compliant with the Thumb Procedure Call Standard for all leaf functions. (A leaf function is one that does not call any other functions.) The default is -mno-apcs-leaf-frame. -mcallee-super-interworking Gives all externally visible functions in the file being compiled an ARM instruction set header which switches to Thumb mode before executing the rest of the function. This allows these functions to be called from non-interworking code. This option is not valid in AAPCS configurations because interworking is enabled by default. -mcaller-super-interworking Allows calls via function pointers (including virtual functions) to execute correctly regardless of whether the target code has been compiled for interworking or not. There is a small overhead in the cost of executing a function pointer if this option is enabled. This option is not valid in AAPCS configurations because interworking is enabled by default. -mtp=name Specify the access model for the thread local storage pointer. The valid models are soft, which generates calls to "__aeabi_read_tp", cp15, which fetches the thread pointer from "cp15" directly (supported in the arm6k architecture), and auto, which uses the best available method for the selected processor. The default setting is auto. -mtls-dialect=dialect Specify the dialect to use for accessing thread local storage. Two dialects are supported---gnu and gnu2. The gnu dialect selects the original GNU scheme for supporting local and global dynamic TLS models. The gnu2 dialect selects the GNU descriptor scheme, which provides better performance for shared libraries. The GNU descriptor scheme is compatible with the original scheme, but does require new assembler, linker and library support. Initial and local exec TLS models are unaffected by this option and always use the original scheme. -mword-relocations Only generate absolute relocations on word-sized values (i.e. R_ARM_ABS32). This is enabled by default on targets (uClinux, SymbianOS) where the runtime loader imposes this restriction, and when -fpic or -fPIC is specified. This option conflicts with -mslow-flash-data. -mfix-cortex-m3-ldrd Some Cortex-M3 cores can cause data corruption when "ldrd" instructions with overlapping destination and base registers are used. This option avoids generating these instructions. This option is enabled by default when -mcpu=cortex-m3 is specified. -munaligned-access -mno-unaligned-access Enables (or disables) reading and writing of 16- and 32- bit values from addresses that are not 16- or 32- bit aligned. By default unaligned access is disabled for all pre-ARMv6, all ARMv6-M and for ARMv8-M Baseline architectures, and enabled for all other architectures. If unaligned access is not enabled then words in packed data structures are accessed a byte at a time. 
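As an illustration of the packed-structure case mentioned above, the sketch below declares a 32-bit field at an unaligned offset; whether GCC reads it with a single (possibly unaligned) load or assembles it from individual bytes follows the rules of this option. The structure and function names are made up for the example.

        #include <stdint.h>

        struct __attribute__((packed)) frame_header {
          uint8_t  kind;
          uint32_t length;      /* starts at offset 1, so it is not 4-byte aligned */
        };

        uint32_t frame_length (const struct frame_header *h)
        {
          /* With unaligned access enabled this may be one 32-bit load;
             otherwise it is built from four byte loads.  */
          return h->length;
        }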
The ARM attribute "Tag_CPU_unaligned_access" is set in the generated object file to either true or false, depending upon the setting of this option. If unaligned access is enabled then the preprocessor symbol "__ARM_FEATURE_UNALIGNED" is also defined. -mneon-for-64bits Enables using Neon to handle scalar 64-bits operations. This is disabled by default since the cost of moving data from core registers to Neon is high. -mslow-flash-data Assume loading data from flash is slower than fetching instruction. Therefore literal load is minimized for better performance. This option is only supported when compiling for ARMv7 M-profile and off by default. It conflicts with -mword-relocations. -masm-syntax-unified Assume inline assembler is using unified asm syntax. The default is currently off which implies divided syntax. This option has no impact on Thumb2. However, this may change in future releases of GCC. Divided syntax should be considered deprecated. -mrestrict-it Restricts generation of IT blocks to conform to the rules of ARMv8-A. IT blocks can only contain a single 16-bit instruction from a select set of instructions. This option is on by default for ARMv8-A Thumb mode. -mprint-tune-info Print CPU tuning information as comment in assembler file. This is an option used only for regression testing of the compiler and not intended for ordinary use in compiling code. This option is disabled by default. -mverbose-cost-dump Enable verbose cost model dumping in the debug dump files. This option is provided for use in debugging the compiler. -mpure-code Do not allow constant data to be placed in code sections. Additionally, when compiling for ELF object format give all text sections the ELF processor-specific section attribute "SHF_ARM_PURECODE". This option is only available when generating non-pic code for M-profile targets. -mcmse Generate secure code as per the "ARMv8-M Security Extensions: Requirements on Development Tools Engineering Specification", which can be found on <https://developer.arm.com/documentation/ecm0359818/latest/ >. AVR Options These options are defined for AVR implementations: -mmcu=mcu Specify Atmel AVR instruction set architectures (ISA) or MCU type. The default for this option is avr2. GCC supports the following AVR devices and ISAs: "avr2" "Classic" devices with up to 8 KiB of program memory. mcu = "attiny22", "attiny26", "at90s2313", "at90s2323", "at90s2333", "at90s2343", "at90s4414", "at90s4433", "at90s4434", "at90c8534", "at90s8515", "at90s8535". "avr25" "Classic" devices with up to 8 KiB of program memory and with the "MOVW" instruction. mcu = "attiny13", "attiny13a", "attiny24", "attiny24a", "attiny25", "attiny261", "attiny261a", "attiny2313", "attiny2313a", "attiny43u", "attiny44", "attiny44a", "attiny45", "attiny48", "attiny441", "attiny461", "attiny461a", "attiny4313", "attiny84", "attiny84a", "attiny85", "attiny87", "attiny88", "attiny828", "attiny841", "attiny861", "attiny861a", "ata5272", "ata6616c", "at86rf401". "avr3" "Classic" devices with 16 KiB up to 64 KiB of program memory. mcu = "at76c711", "at43usb355". "avr31" "Classic" devices with 128 KiB of program memory. mcu = "atmega103", "at43usb320". "avr35" "Classic" devices with 16 KiB up to 64 KiB of program memory and with the "MOVW" instruction. mcu = "attiny167", "attiny1634", "atmega8u2", "atmega16u2", "atmega32u2", "ata5505", "ata6617c", "ata664251", "at90usb82", "at90usb162". "avr4" "Enhanced" devices with up to 8 KiB of program memory. 
mcu = "atmega48", "atmega48a", "atmega48p", "atmega48pa", "atmega48pb", "atmega8", "atmega8a", "atmega8hva", "atmega88", "atmega88a", "atmega88p", "atmega88pa", "atmega88pb", "atmega8515", "atmega8535", "ata6285", "ata6286", "ata6289", "ata6612c", "at90pwm1", "at90pwm2", "at90pwm2b", "at90pwm3", "at90pwm3b", "at90pwm81". "avr5" "Enhanced" devices with 16 KiB up to 64 KiB of program memory. mcu = "atmega16", "atmega16a", "atmega16hva", "atmega16hva2", "atmega16hvb", "atmega16hvbrevb", "atmega16m1", "atmega16u4", "atmega161", "atmega162", "atmega163", "atmega164a", "atmega164p", "atmega164pa", "atmega165", "atmega165a", "atmega165p", "atmega165pa", "atmega168", "atmega168a", "atmega168p", "atmega168pa", "atmega168pb", "atmega169", "atmega169a", "atmega169p", "atmega169pa", "atmega32", "atmega32a", "atmega32c1", "atmega32hvb", "atmega32hvbrevb", "atmega32m1", "atmega32u4", "atmega32u6", "atmega323", "atmega324a", "atmega324p", "atmega324pa", "atmega325", "atmega325a", "atmega325p", "atmega325pa", "atmega328", "atmega328p", "atmega328pb", "atmega329", "atmega329a", "atmega329p", "atmega329pa", "atmega3250", "atmega3250a", "atmega3250p", "atmega3250pa", "atmega3290", "atmega3290a", "atmega3290p", "atmega3290pa", "atmega406", "atmega64", "atmega64a", "atmega64c1", "atmega64hve", "atmega64hve2", "atmega64m1", "atmega64rfr2", "atmega640", "atmega644", "atmega644a", "atmega644p", "atmega644pa", "atmega644rfr2", "atmega645", "atmega645a", "atmega645p", "atmega649", "atmega649a", "atmega649p", "atmega6450", "atmega6450a", "atmega6450p", "atmega6490", "atmega6490a", "atmega6490p", "ata5795", "ata5790", "ata5790n", "ata5791", "ata6613c", "ata6614q", "ata5782", "ata5831", "ata8210", "ata8510", "ata5702m322", "at90pwm161", "at90pwm216", "at90pwm316", "at90can32", "at90can64", "at90scr100", "at90usb646", "at90usb647", "at94k", "m3000". "avr51" "Enhanced" devices with 128 KiB of program memory. mcu = "atmega128", "atmega128a", "atmega128rfa1", "atmega128rfr2", "atmega1280", "atmega1281", "atmega1284", "atmega1284p", "atmega1284rfr2", "at90can128", "at90usb1286", "at90usb1287". "avr6" "Enhanced" devices with 3-byte PC, i.e. with more than 128 KiB of program memory. mcu = "atmega256rfr2", "atmega2560", "atmega2561", "atmega2564rfr2". "avrxmega2" "XMEGA" devices with more than 8 KiB and up to 64 KiB of program memory. mcu = "atxmega8e5", "atxmega16a4", "atxmega16a4u", "atxmega16c4", "atxmega16d4", "atxmega16e5", "atxmega32a4", "atxmega32a4u", "atxmega32c3", "atxmega32c4", "atxmega32d3", "atxmega32d4", "atxmega32e5". "avrxmega3" "XMEGA" devices with up to 64 KiB of combined program memory and RAM, and with program memory visible in the RAM address space. mcu = "attiny202", "attiny204", "attiny212", "attiny214", "attiny402", "attiny404", "attiny406", "attiny412", "attiny414", "attiny416", "attiny417", "attiny804", "attiny806", "attiny807", "attiny814", "attiny816", "attiny817", "attiny1604", "attiny1606", "attiny1607", "attiny1614", "attiny1616", "attiny1617", "attiny3214", "attiny3216", "attiny3217", "atmega808", "atmega809", "atmega1608", "atmega1609", "atmega3208", "atmega3209", "atmega4808", "atmega4809". "avrxmega4" "XMEGA" devices with more than 64 KiB and up to 128 KiB of program memory. mcu = "atxmega64a3", "atxmega64a3u", "atxmega64a4u", "atxmega64b1", "atxmega64b3", "atxmega64c3", "atxmega64d3", "atxmega64d4". "avrxmega5" "XMEGA" devices with more than 64 KiB and up to 128 KiB of program memory and more than 64 KiB of RAM. mcu = "atxmega64a1", "atxmega64a1u". 
"avrxmega6" "XMEGA" devices with more than 128 KiB of program memory. mcu = "atxmega128a3", "atxmega128a3u", "atxmega128b1", "atxmega128b3", "atxmega128c3", "atxmega128d3", "atxmega128d4", "atxmega192a3", "atxmega192a3u", "atxmega192c3", "atxmega192d3", "atxmega256a3", "atxmega256a3b", "atxmega256a3bu", "atxmega256a3u", "atxmega256c3", "atxmega256d3", "atxmega384c3", "atxmega384d3". "avrxmega7" "XMEGA" devices with more than 128 KiB of program memory and more than 64 KiB of RAM. mcu = "atxmega128a1", "atxmega128a1u", "atxmega128a4u". "avrtiny" "TINY" Tiny core devices with 512 B up to 4 KiB of program memory. mcu = "attiny4", "attiny5", "attiny9", "attiny10", "attiny20", "attiny40". "avr1" This ISA is implemented by the minimal AVR core and supported for assembler only. mcu = "attiny11", "attiny12", "attiny15", "attiny28", "at90s1200". -mabsdata Assume that all data in static storage can be accessed by LDS / STS instructions. This option has only an effect on reduced Tiny devices like ATtiny40. See also the "absdata" AVR Variable Attributes,variable attribute. -maccumulate-args Accumulate outgoing function arguments and acquire/release the needed stack space for outgoing function arguments once in function prologue/epilogue. Without this option, outgoing arguments are pushed before calling a function and popped afterwards. Popping the arguments after the function call can be expensive on AVR so that accumulating the stack space might lead to smaller executables because arguments need not be removed from the stack after such a function call. This option can lead to reduced code size for functions that perform several calls to functions that get their arguments on the stack like calls to printf-like functions. -mbranch-cost=cost Set the branch costs for conditional branch instructions to cost. Reasonable values for cost are small, non-negative integers. The default branch cost is 0. -mcall-prologues Functions prologues/epilogues are expanded as calls to appropriate subroutines. Code size is smaller. -mgas-isr-prologues Interrupt service routines (ISRs) may use the "__gcc_isr" pseudo instruction supported by GNU Binutils. If this option is on, the feature can still be disabled for individual ISRs by means of the AVR Function Attributes,,"no_gccisr" function attribute. This feature is activated per default if optimization is on (but not with -Og, @pxref{Optimize Options}), and if GNU Binutils support PR21683 ("https://sourceware.org/PR21683"). -mint8 Assume "int" to be 8-bit integer. This affects the sizes of all types: a "char" is 1 byte, an "int" is 1 byte, a "long" is 2 bytes, and "long long" is 4 bytes. Please note that this option does not conform to the C standards, but it results in smaller code size. -mmain-is-OS_task Do not save registers in "main". The effect is the same like attaching attribute AVR Function Attributes,,"OS_task" to "main". It is activated per default if optimization is on. -mn-flash=num Assume that the flash memory has a size of num times 64 KiB. -mno-interrupts Generated code is not compatible with hardware interrupts. Code size is smaller. -mrelax Try to replace "CALL" resp. "JMP" instruction by the shorter "RCALL" resp. "RJMP" instruction if applicable. Setting -mrelax just adds the --mlink-relax option to the assembler's command line and the --relax option to the linker's command line. Jump relaxing is performed by the linker because jump offsets are not known before code is located. 
Therefore, the assembler code generated by the compiler is the same, but the instructions in the executable may differ from instructions in the assembler code. Relaxing must be turned on if linker stubs are needed, see the section on "EIND" and linker stubs below. -mrmw Assume that the device supports the Read-Modify-Write instructions "XCH", "LAC", "LAS" and "LAT". -mshort-calls Assume that "RJMP" and "RCALL" can target the whole program memory. This option is used internally for multilib selection. It is not an optimization option, and you don't need to set it by hand. -msp8 Treat the stack pointer register as an 8-bit register, i.e. assume the high byte of the stack pointer is zero. In general, you don't need to set this option by hand. This option is used internally by the compiler to select and build multilibs for architectures "avr2" and "avr25". These architectures mix devices with and without "SPH". For any setting other than -mmcu=avr2 or -mmcu=avr25 the compiler driver adds or removes this option from the compiler proper's command line, because the compiler then knows if the device or architecture has an 8-bit stack pointer and thus no "SPH" register or not. -mstrict-X Use address register "X" in a way proposed by the hardware. This means that "X" is only used in indirect, post-increment or pre-decrement addressing. Without this option, the "X" register may be used in the same way as "Y" or "Z" which then is emulated by additional instructions. For example, loading a value with "X+const" addressing with a small non-negative "const < 64" to a register Rn is performed as adiw r26, const ; X += const ld <Rn>, X ; <Rn> = *X sbiw r26, const ; X -= const -mtiny-stack Only change the lower 8 bits of the stack pointer. -mfract-convert-truncate Allow to use truncation instead of rounding towards zero for fractional fixed-point types. -nodevicelib Don't link against AVR-LibC's device specific library "lib<mcu>.a". -nodevicespecs Don't add -specs=device-specs/specs-<mcu> to the compiler driver's command line. The user takes responsibility for supplying the sub-processes like compiler proper, assembler and linker with appropriate command line options. -Waddr-space-convert Warn about conversions between address spaces in the case where the resulting address space is not contained in the incoming address space. -Wmisspelled-isr Warn if the ISR is misspelled, i.e. without __vector prefix. Enabled by default. "EIND" and Devices with More Than 128 Ki Bytes of Flash Pointers in the implementation are 16 bits wide. The address of a function or label is represented as word address so that indirect jumps and calls can target any code address in the range of 64 Ki words. In order to facilitate indirect jump on devices with more than 128 Ki bytes of program memory space, there is a special function register called "EIND" that serves as most significant part of the target address when "EICALL" or "EIJMP" instructions are used. Indirect jumps and calls on these devices are handled as follows by the compiler and are subject to some limitations: * The compiler never sets "EIND". * The compiler uses "EIND" implicitly in "EICALL"/"EIJMP" instructions or might read "EIND" directly in order to emulate an indirect call/jump by means of a "RET" instruction. * The compiler assumes that "EIND" never changes during the startup code or during the application. In particular, "EIND" is not saved/restored in function or interrupt service routine prologue/epilogue. 
* For indirect calls to functions and computed goto, the linker generates stubs. Stubs are jump pads sometimes also called trampolines. Thus, the indirect call/jump jumps to such a stub. The stub contains a direct jump to the desired address. * Linker relaxation must be turned on so that the linker generates the stubs correctly in all situations. See the compiler option -mrelax and the linker option --relax. There are corner cases where the linker is supposed to generate stubs but aborts without relaxation and without a helpful error message. * The default linker script is arranged for code with "EIND = 0". If code is supposed to work for a setup with "EIND != 0", a custom linker script has to be used in order to place the sections whose name start with ".trampolines" into the segment where "EIND" points to. * The startup code from libgcc never sets "EIND". Notice that startup code is a blend of code from libgcc and AVR-LibC. For the impact of AVR-LibC on "EIND", see the AVR- LibC user manual ("http://nongnu.org/avr-libc/user-manual/"). * It is legitimate for user-specific startup code to set up "EIND" early, for example by means of initialization code located in section ".init3". Such code runs prior to general startup code that initializes RAM and calls constructors, but after the bit of startup code from AVR-LibC that sets "EIND" to the segment where the vector table is located. #include <avr/io.h> static void __attribute__((section(".init3"),naked,used,no_instrument_function)) init3_set_eind (void) { __asm volatile ("ldi r24,pm_hh8(__trampolines_start)\n\t" "out %i0,r24" :: "n" (&EIND) : "r24","memory"); } The "__trampolines_start" symbol is defined in the linker script. * Stubs are generated automatically by the linker if the following two conditions are met: -<The address of a label is taken by means of the "gs" modifier> (short for generate stubs) like so: LDI r24, lo8(gs(<func>)) LDI r25, hi8(gs(<func>)) -<The final location of that label is in a code segment> outside the segment where the stubs are located. * The compiler emits such "gs" modifiers for code labels in the following situations: -<Taking address of a function or code label.> -<Computed goto.> -<If prologue-save function is used, see -mcall-prologues> command-line option. -<Switch/case dispatch tables. If you do not want such dispatch> tables you can specify the -fno-jump-tables command-line option. -<C and C++ constructors/destructors called during startup/shutdown.> -<If the tools hit a "gs()" modifier explained above.> * Jumping to non-symbolic addresses like so is not supported: int main (void) { /* Call function at word address 0x2 */ return ((int(*)(void)) 0x2)(); } Instead, a stub has to be set up, i.e. the function has to be called through a symbol ("func_4" in the example): int main (void) { extern int func_4 (void); /* Call function at byte address 0x4 */ return func_4(); } and the application be linked with -Wl,--defsym,func_4=0x4. Alternatively, "func_4" can be defined in the linker script. Handling of the "RAMPD", "RAMPX", "RAMPY" and "RAMPZ" Special Function Registers Some AVR devices support memories larger than the 64 KiB range that can be accessed with 16-bit pointers. To access memory locations outside this 64 KiB range, the content of a "RAMP" register is used as high part of the address: The "X", "Y", "Z" address register is concatenated with the "RAMPX", "RAMPY", "RAMPZ" special function register, respectively, to get a wide address. Similarly, "RAMPD" is used together with direct addressing. 
* The startup code initializes the "RAMP" special function registers with zero. * If a named address space other than generic or "__flash" is used (see AVR Named Address Spaces), then "RAMPZ" is set as needed before the operation. * If the device supports RAM larger than 64 KiB and the compiler needs to change "RAMPZ" to accomplish an operation, "RAMPZ" is reset to zero after the operation. * If the device comes with a specific "RAMP" register, the ISR prologue/epilogue saves/restores that SFR and initializes it with zero in case the ISR code might (implicitly) use it. * RAM larger than 64 KiB is not supported by GCC for AVR targets. If you use inline assembler to read from locations outside the 16-bit address range and change one of the "RAMP" registers, you must reset it to zero after the access. AVR Built-in Macros GCC defines several built-in macros so that the user code can test for the presence or absence of features. Almost all of the following built-in macros are deduced from device capabilities and thus triggered by the -mmcu= command-line option. For even more AVR-specific built-in macros see AVR Named Address Spaces and AVR Built-in Functions. "__AVR_ARCH__" Built-in macro that resolves to a decimal number that identifies the architecture and depends on the -mmcu=mcu option. Possible values are: 2, 25, 3, 31, 35, 4, 5, 51, 6 for mcu="avr2", "avr25", "avr3", "avr31", "avr35", "avr4", "avr5", "avr51", "avr6", respectively, and 100, 102, 103, 104, 105, 106, 107 for mcu="avrtiny", "avrxmega2", "avrxmega3", "avrxmega4", "avrxmega5", "avrxmega6", "avrxmega7", respectively. If mcu specifies a device, this built-in macro is set accordingly. For example, with -mmcu=atmega8 the macro is defined to 4. "__AVR_Device__" Setting -mmcu=device defines this built-in macro which reflects the device's name. For example, -mmcu=atmega8 defines the built-in macro "__AVR_ATmega8__", -mmcu=attiny261a defines "__AVR_ATtiny261A__", etc. The built-in macros' names follow the scheme "__AVR_Device__" where Device is the device name as given in the AVR user manual. The difference between Device in the built-in macro and device in -mmcu=device is that the latter is always lowercase. If device is not a device but only a core architecture like avr51, this macro is not defined. "__AVR_DEVICE_NAME__" Setting -mmcu=device defines this built-in macro to the device's name. For example, with -mmcu=atmega8 the macro is defined to "atmega8". If device is not a device but only a core architecture like avr51, this macro is not defined. "__AVR_XMEGA__" The device / architecture belongs to the XMEGA family of devices. "__AVR_HAVE_ELPM__" The device has the "ELPM" instruction. "__AVR_HAVE_ELPMX__" The device has the "ELPM Rn,Z" and "ELPM Rn,Z+" instructions. "__AVR_HAVE_MOVW__" The device has the "MOVW" instruction to perform 16-bit register-register moves. "__AVR_HAVE_LPMX__" The device has the "LPM Rn,Z" and "LPM Rn,Z+" instructions. "__AVR_HAVE_MUL__" The device has a hardware multiplier. "__AVR_HAVE_JMP_CALL__" The device has the "JMP" and "CALL" instructions. This is the case for devices with more than 8 KiB of program memory. "__AVR_HAVE_EIJMP_EICALL__" "__AVR_3_BYTE_PC__" The device has the "EIJMP" and "EICALL" instructions. This is the case for devices with more than 128 KiB of program memory. This also means that the program counter (PC) is 3 bytes wide. "__AVR_2_BYTE_PC__" The program counter (PC) is 2 bytes wide. This is the case for devices with up to 128 KiB of program memory.
"__AVR_HAVE_8BIT_SP__" "__AVR_HAVE_16BIT_SP__" The stack pointer (SP) register is treated as 8-bit respectively 16-bit register by the compiler. The definition of these macros is affected by -mtiny-stack. "__AVR_HAVE_SPH__" "__AVR_SP8__" The device has the SPH (high part of stack pointer) special function register or has an 8-bit stack pointer, respectively. The definition of these macros is affected by -mmcu= and in the cases of -mmcu=avr2 and -mmcu=avr25 also by -msp8. "__AVR_HAVE_RAMPD__" "__AVR_HAVE_RAMPX__" "__AVR_HAVE_RAMPY__" "__AVR_HAVE_RAMPZ__" The device has the "RAMPD", "RAMPX", "RAMPY", "RAMPZ" special function register, respectively. "__NO_INTERRUPTS__" This macro reflects the -mno-interrupts command-line option. "__AVR_ERRATA_SKIP__" "__AVR_ERRATA_SKIP_JMP_CALL__" Some AVR devices (AT90S8515, ATmega103) must not skip 32-bit instructions because of a hardware erratum. Skip instructions are "SBRS", "SBRC", "SBIS", "SBIC" and "CPSE". The second macro is only defined if "__AVR_HAVE_JMP_CALL__" is also set. "__AVR_ISA_RMW__" The device has Read-Modify-Write instructions (XCH, LAC, LAS and LAT). "__AVR_SFR_OFFSET__=offset" Instructions that can address I/O special function registers directly like "IN", "OUT", "SBI", etc. may use a different address as if addressed by an instruction to access RAM like "LD" or "STS". This offset depends on the device architecture and has to be subtracted from the RAM address in order to get the respective I/O address. "__AVR_SHORT_CALLS__" The -mshort-calls command line option is set. "__AVR_PM_BASE_ADDRESS__=addr" Some devices support reading from flash memory by means of "LD*" instructions. The flash memory is seen in the data address space at an offset of "__AVR_PM_BASE_ADDRESS__". If this macro is not defined, this feature is not available. If defined, the address space is linear and there is no need to put ".rodata" into RAM. This is handled by the default linker description file, and is currently available for "avrtiny" and "avrxmega3". Even more convenient, there is no need to use address spaces like "__flash" or features like attribute "progmem" and "pgm_read_*". "__WITH_AVRLIBC__" The compiler is configured to be used together with AVR-Libc. See the --with-avrlibc configure option. Blackfin Options -mcpu=cpu[-sirevision] Specifies the name of the target Blackfin processor. Currently, cpu can be one of bf512, bf514, bf516, bf518, bf522, bf523, bf524, bf525, bf526, bf527, bf531, bf532, bf533, bf534, bf536, bf537, bf538, bf539, bf542, bf544, bf547, bf548, bf549, bf542m, bf544m, bf547m, bf548m, bf549m, bf561, bf592. The optional sirevision specifies the silicon revision of the target Blackfin processor. Any workarounds available for the targeted silicon revision are enabled. If sirevision is none, no workarounds are enabled. If sirevision is any, all workarounds for the targeted processor are enabled. The "__SILICON_REVISION__" macro is defined to two hexadecimal digits representing the major and minor numbers in the silicon revision. If sirevision is none, the "__SILICON_REVISION__" is not defined. If sirevision is any, the "__SILICON_REVISION__" is defined to be 0xffff. If this optional sirevision is not used, GCC assumes the latest known silicon revision of the targeted Blackfin processor. GCC defines a preprocessor macro for the specified cpu. For the bfin-elf toolchain, this option causes the hardware BSP provided by libgloss to be linked in if -msim is not given. Without this option, bf532 is used as the processor by default. 
Note that support for bf561 is incomplete. For bf561, only the preprocessor macro is defined. -msim Specifies that the program will be run on the simulator. This causes the simulator BSP provided by libgloss to be linked in. This option has effect only for bfin-elf toolchain. Certain other options, such as -mid-shared-library and -mfdpic, imply -msim. -momit-leaf-frame-pointer Don't keep the frame pointer in a register for leaf functions. This avoids the instructions to save, set up and restore frame pointers and makes an extra register available in leaf functions. -mspecld-anomaly When enabled, the compiler ensures that the generated code does not contain speculative loads after jump instructions. If this option is used, "__WORKAROUND_SPECULATIVE_LOADS" is defined. -mno-specld-anomaly Don't generate extra code to prevent speculative loads from occurring. -mcsync-anomaly When enabled, the compiler ensures that the generated code does not contain CSYNC or SSYNC instructions too soon after conditional branches. If this option is used, "__WORKAROUND_SPECULATIVE_SYNCS" is defined. -mno-csync-anomaly Don't generate extra code to prevent CSYNC or SSYNC instructions from occurring too soon after a conditional branch. -mlow64k When enabled, the compiler is free to take advantage of the knowledge that the entire program fits into the low 64k of memory. -mno-low64k Assume that the program is arbitrarily large. This is the default. -mstack-check-l1 Do stack checking using information placed into L1 scratchpad memory by the uClinux kernel. -mid-shared-library Generate code that supports shared libraries via the library ID method. This allows for execute in place and shared libraries in an environment without virtual memory management. This option implies -fPIC. With a bfin-elf target, this option implies -msim. -mno-id-shared-library Generate code that doesn't assume ID-based shared libraries are being used. This is the default. -mleaf-id-shared-library Generate code that supports shared libraries via the library ID method, but assumes that this library or executable won't link against any other ID shared libraries. That allows the compiler to use faster code for jumps and calls. -mno-leaf-id-shared-library Do not assume that the code being compiled won't link against any ID shared libraries. Slower code is generated for jump and call insns. -mshared-library-id=n Specifies the identification number of the ID-based shared library being compiled. Specifying a value of 0 generates more compact code; specifying other values forces the allocation of that number to the current library but is no more space- or time-efficient than omitting this option. -msep-data Generate code that allows the data segment to be located in a different area of memory from the text segment. This allows for execute in place in an environment without virtual memory management by eliminating relocations against the text section. -mno-sep-data Generate code that assumes that the data segment follows the text segment. This is the default. -mlong-calls -mno-long-calls Tells the compiler to perform function calls by first loading the address of the function into a register and then performing a subroutine call on this register. This switch is needed if the target function lies outside of the 24-bit addressing range of the offset-based version of subroutine call instruction. This feature is not enabled by default. Specifying -mno-long-calls restores the default behavior. 
Note these switches have no effect on how the compiler generates code to handle function calls via function pointers. -mfast-fp Link with the fast floating-point library. This library relaxes some of the IEEE floating-point standard's rules for checking inputs against Not-a-Number (NAN), in the interest of performance. -minline-plt Enable inlining of PLT entries in function calls to functions that are not known to bind locally. It has no effect without -mfdpic. -mmulticore Build a standalone application for multicore Blackfin processors. This option causes proper start files and link scripts supporting multicore to be used, and defines the macro "__BFIN_MULTICORE". It can only be used with -mcpu=bf561[-sirevision]. This option can be used with -mcorea or -mcoreb, which selects the one-application-per-core programming model. Without -mcorea or -mcoreb, the single-application/dual-core programming model is used. In this model, the main function of Core B should be named as "coreb_main". If this option is not used, the single-core application programming model is used. -mcorea Build a standalone application for Core A of BF561 when using the one-application-per-core programming model. Proper start files and link scripts are used to support Core A, and the macro "__BFIN_COREA" is defined. This option can only be used in conjunction with -mmulticore. -mcoreb Build a standalone application for Core B of BF561 when using the one-application-per-core programming model. Proper start files and link scripts are used to support Core B, and the macro "__BFIN_COREB" is defined. When this option is used, "coreb_main" should be used instead of "main". This option can only be used in conjunction with -mmulticore. -msdram Build a standalone application for SDRAM. Proper start files and link scripts are used to put the application into SDRAM, and the macro "__BFIN_SDRAM" is defined. The loader should initialize SDRAM before loading the application. -micplb Assume that ICPLBs are enabled at run time. This has an effect on certain anomaly workarounds. For Linux targets, the default is to assume ICPLBs are enabled; for standalone applications the default is off. C6X Options -march=name This specifies the name of the target architecture. GCC uses this name to determine what kind of instructions it can emit when generating assembly code. Permissible names are: c62x, c64x, c64x+, c67x, c67x+, c674x. -mbig-endian Generate code for a big-endian target. -mlittle-endian Generate code for a little-endian target. This is the default. -msim Choose startup files and linker script suitable for the simulator. -msdata=default Put small global and static data in the ".neardata" section, which is pointed to by register "B14". Put small uninitialized global and static data in the ".bss" section, which is adjacent to the ".neardata" section. Put small read-only data into the ".rodata" section. The corresponding sections used for large pieces of data are ".fardata", ".far" and ".const". -msdata=all Put all data, not just small objects, into the sections reserved for small data, and use addressing relative to the "B14" register to access them. -msdata=none Make no use of the sections reserved for small data, and use absolute addresses to access all data. Put all initialized global and static data in the ".fardata" section, and all uninitialized data in the ".far" section. Put all constant data into the ".const" section. CRIS Options These options are defined specifically for the CRIS ports. 
-march=architecture-type -mcpu=architecture-type Generate code for the specified architecture. The choices for architecture-type are v3, v8 and v10 for ETRAX 4, ETRAX 100, and ETRAX 100 LX, respectively. The default is v0, except for cris-axis-linux-gnu, where the default is v10. -mtune=architecture-type Tune to architecture-type everything applicable about the generated code, except for the ABI and the set of available instructions. The choices for architecture-type are the same as for -march=architecture-type. -mmax-stack-frame=n Warn when the stack frame of a function exceeds n bytes. -metrax4 -metrax100 The options -metrax4 and -metrax100 are synonyms for -march=v3 and -march=v8 respectively. -mmul-bug-workaround -mno-mul-bug-workaround Work around a bug in the "muls" and "mulu" instructions for CPU models where it applies. This option is active by default. -mpdebug Enable CRIS-specific verbose debug-related information in the assembly code. This option also has the effect of turning off the #NO_APP formatted-code indicator to the assembler at the beginning of the assembly file. -mcc-init Do not use condition-code results from the previous instruction; always emit compare and test instructions before use of condition codes. -mno-side-effects Do not emit instructions with side effects in addressing modes other than post-increment. -mstack-align -mno-stack-align -mdata-align -mno-data-align -mconst-align -mno-const-align These options (and their no- counterparts, which eliminate the arrangements) arrange for the stack frame, individual data and constants to be aligned for the maximum single data access size for the chosen CPU model. The default is to arrange for 32-bit alignment. ABI details such as structure layout are not affected by these options. -m32-bit -m16-bit -m8-bit Similar to the stack-, data- and const-align options above, these options arrange for the stack frame, writable data and constants to all be 32-bit, 16-bit or 8-bit aligned. The default is 32-bit alignment. -mno-prologue-epilogue -mprologue-epilogue With -mno-prologue-epilogue, the normal function prologue and epilogue which set up the stack frame are omitted and no return instructions or return sequences are generated in the code. Use this option only together with visual inspection of the compiled code: no warnings or errors are generated when call-saved registers must be saved, or storage for local variables needs to be allocated. -mno-gotplt -mgotplt With -fpic and -fPIC, don't generate (do generate) instruction sequences that load addresses for functions from the PLT part of the GOT rather than (traditional on other architectures) calls to the PLT. The default is -mgotplt. -melf Legacy no-op option only recognized with the cris-axis-elf and cris-axis-linux-gnu targets. -mlinux Legacy no-op option only recognized with the cris-axis-linux-gnu target. -sim This option, recognized for the cris-axis-elf target, arranges to link with input-output functions from a simulator library. Code, initialized data and zero-initialized data are allocated consecutively. -sim2 Like -sim, but pass linker options to locate initialized data at 0x40000000 and zero-initialized data at 0x80000000. CR16 Options These options are defined specifically for the CR16 ports. -mmac Enable the use of multiply-accumulate instructions. Disabled by default. -mcr16cplus -mcr16c Generate code for the CR16C or CR16C+ architecture. The CR16C+ architecture is the default. -msim Links the library libsim.a, which is compatible with the simulator. Applicable to the ELF compiler only.
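As a sketch only (program names are hypothetical), a CR16C+ build that links the simulator library described above could be invoked as:

           gcc -mcr16cplus -msim -o prog.elf prog.c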
-mint32 Make the "int" type 32 bits wide. -mbit-ops Generate "sbit"/"cbit" instructions for bit manipulations. -mdata-model=model Choose a data model. The choices for model are near, far or medium. medium is the default. However, far is not valid with -mcr16c, as the CR16C architecture does not support the far data model. C-SKY Options GCC supports these options when compiling for C-SKY V2 processors. -march=arch Specify the C-SKY target architecture. Valid values for arch are: ck801, ck802, ck803, ck807, and ck810. The default is ck810. -mcpu=cpu Specify the C-SKY target processor. Valid values for cpu are: ck801, ck801t, ck802, ck802t, ck802j, ck803, ck803h, ck803t, ck803ht, ck803f, ck803fh, ck803e, ck803eh, ck803et, ck803eht, ck803ef, ck803efh, ck803ft, ck803eft, ck803efht, ck803r1, ck803hr1, ck803tr1, ck803htr1, ck803fr1, ck803fhr1, ck803er1, ck803ehr1, ck803etr1, ck803ehtr1, ck803efr1, ck803efhr1, ck803ftr1, ck803eftr1, ck803efhtr1, ck803s, ck803st, ck803se, ck803sf, ck803sef, ck803seft, ck807e, ck807ef, ck807, ck807f, ck810e, ck810et, ck810ef, ck810eft, ck810, ck810v, ck810f, ck810t, ck810fv, ck810tv, ck810ft, and ck810ftv. -mbig-endian -EB -mlittle-endian -EL Select big- or little-endian code. The default is little-endian. -mhard-float -msoft-float Select hardware or software floating-point implementations. The default is soft float. -mdouble-float -mno-double-float When -mhard-float is in effect, enable generation of double-precision float instructions. This is the default except when compiling for CK803. -mfdivdu -mno-fdivdu When -mhard-float is in effect, enable generation of "frecipd", "fsqrtd", and "fdivd" instructions. This is the default except when compiling for CK803. -mfpu=fpu Select the floating-point processor. This option can only be used with -mhard-float. Values for fpu are fpv2_sf (equivalent to -mno-double-float -mno-fdivdu), fpv2 (-mdouble-float -mno-fdivdu), and fpv2_divd (-mdouble-float -mfdivdu). -melrw -mno-elrw Enable the extended "lrw" instruction. This option defaults to on for CK801 and off otherwise. -mistack -mno-istack Enable interrupt stack instructions; the default is off. The -mistack option is required to handle the "interrupt" and "isr" function attributes. -mmp Enable multiprocessor instructions; the default is off. -mcp Enable coprocessor instructions; the default is off. -mcache Enable coprocessor instructions; the default is off. -msecurity Enable C-SKY security instructions; the default is off. -mtrust Enable C-SKY trust instructions; the default is off. -mdsp -medsp -mvdsp Enable C-SKY DSP, Enhanced DSP, or Vector DSP instructions, respectively. All of these options default to off. -mdiv -mno-div Generate divide instructions. Default is off. -msmart -mno-smart Generate code for Smart Mode, using only registers numbered 0-7 to allow use of 16-bit instructions. This option is ignored for CK801, where this is the required behavior, and it defaults to on for CK802. For other targets, the default is off. -mhigh-registers -mno-high-registers Generate code using the high registers numbered 16-31. This option is not supported on CK801, CK802, or CK803, and is enabled by default for other processors. -manchor -mno-anchor Generate code using global anchor symbol addresses. -mpushpop -mno-pushpop Generate code using "push" and "pop" instructions. This option defaults to on. -mmultiple-stld -mstm -mno-multiple-stld -mno-stm Generate code using "stm" and "ldm" instructions. This option isn't supported on CK801 but is enabled by default on other processors.
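For example (the file name is hypothetical), hardware double-precision floating point can be requested for one of the CK810 floating-point processors listed above with:

           gcc -mcpu=ck810f -mhard-float -mfpu=fpv2 -c dsp.c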
-mconstpool -mno-constpool Create constant pools in the compiler instead of deferring it to the assembler. This option is the default and required for correct code generation on CK801 and CK802, and is optional on other processors. -mstack-size -mno-stack-size Emit ".stack_size" directives for each function in the assembly output. This option defaults to off. -mccrt -mno-ccrt Generate code for the C-SKY compiler runtime instead of libgcc. This option defaults to off. -mbranch-cost=n Set the branch costs to roughly "n" instructions. The default is 1. -msched-prolog -mno-sched-prolog Permit scheduling of function prologue and epilogue sequences. Using this option can result in code that is not compliant with the C-SKY V2 ABI prologue requirements and that cannot be debugged or backtraced. It is disabled by default. Darwin Options These options are defined for all architectures running the Darwin operating system. FSF GCC on Darwin does not create "fat" object files; it creates an object file for the single architecture that GCC was built to target. Apple's GCC on Darwin does create "fat" files if multiple -arch options are used; it does so by running the compiler or linker multiple times and joining the results together with lipo. The subtype of the file created (like ppc7400 or ppc970 or i686) is determined by the flags that specify the ISA that GCC is targeting, like -mcpu or -march. The -force_cpusubtype_ALL option can be used to override this. The Darwin tools vary in their behavior when presented with an ISA mismatch. The assembler, as, only permits instructions to be used that are valid for the subtype of the file it is generating, so you cannot put 64-bit instructions in a ppc750 object file. The linker for shared libraries, /usr/bin/libtool, fails and prints an error if asked to create a shared library with a less restrictive subtype than its input files (for instance, trying to put a ppc970 object file in a ppc7400 library). The linker for executables, ld, quietly gives the executable the most restrictive subtype of any of its input files. -Fdir Add the framework directory dir to the head of the list of directories to be searched for header files. These directories are interleaved with those specified by -I options and are scanned in a left-to-right order. A framework directory is a directory with frameworks in it. A framework is a directory with a Headers and/or PrivateHeaders directory contained directly in it that ends in .framework. The name of a framework is the name of this directory excluding the .framework. Headers associated with the framework are found in one of those two directories, with Headers being searched first. A subframework is a framework directory that is in a framework's Frameworks directory. Includes of subframework headers can only appear in a header of a framework that contains the subframework, or in a sibling subframework header. Two subframeworks are siblings if they occur in the same framework. A subframework should not have the same name as a framework; a warning is issued if this is violated. Currently a subframework cannot have subframeworks; in the future, the mechanism may be extended to support this. The standard frameworks can be found in /System/Library/Frameworks and /Library/Frameworks. An example include looks like "#include <Framework/header.h>", where Framework denotes the name of the framework and header.h is found in the PrivateHeaders or Headers directory. -iframeworkdir Like -F except the directory is a treated as a system directory. 
The main difference between this -iframework and -F is that with -iframework the compiler does not warn about constructs contained within header files found via dir. This option is valid only for the C family of languages. -gused Emit debugging information for symbols that are used. For stabs debugging format, this enables -feliminate-unused-debug-symbols. This is by default ON. -gfull Emit debugging information for all symbols and types. -mmacosx-version-min=version The earliest version of MacOS X that this executable will run on is version. Typical values of version include 10.1, 10.2, and 10.3.9. If the compiler was built to use the system's headers by default, then the default for this option is the system version on which the compiler is running, otherwise the default is to make choices that are compatible with as many systems and code bases as possible. -mkernel Enable kernel development mode. The -mkernel option sets -static, -fno-common, -fno-use-cxa-atexit, -fno-exceptions, -fno-non-call-exceptions, -fapple-kext, -fno-weak and -fno-rtti where applicable. This mode also sets -mno-altivec, -msoft-float, -fno-builtin and -mlong-branch for PowerPC targets. -mone-byte-bool Override the defaults for "bool" so that "sizeof(bool)==1". By default "sizeof(bool)" is 4 when compiling for Darwin/PowerPC and 1 when compiling for Darwin/x86, so this option has no effect on x86. Warning: The -mone-byte-bool switch causes GCC to generate code that is not binary compatible with code generated without that switch. Using this switch may require recompiling all other modules in a program, including system libraries. Use this switch to conform to a non-default data model. -mfix-and-continue -ffix-and-continue -findirect-data Generate code suitable for fast turnaround development, such as to allow GDB to dynamically load .o files into already- running programs. -findirect-data and -ffix-and-continue are provided for backwards compatibility. -all_load Loads all members of static archive libraries. See man ld(1) for more information. -arch_errors_fatal Cause the errors having to do with files that have the wrong architecture to be fatal. -bind_at_load Causes the output file to be marked such that the dynamic linker will bind all undefined references when the file is loaded or launched. -bundle Produce a Mach-o bundle format file. See man ld(1) for more information. -bundle_loader executable This option specifies the executable that will load the build output file being linked. See man ld(1) for more information. -dynamiclib When passed this option, GCC produces a dynamic library instead of an executable when linking, using the Darwin libtool command. -force_cpusubtype_ALL This causes GCC's output file to have the ALL subtype, instead of one controlled by the -mcpu or -march option. 
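As a hedged illustration of the Darwin driver options above (library and source names are hypothetical), a dynamic library intended to run on MacOS X 10.3.9 or later could be built with:

           gcc -dynamiclib -mmacosx-version-min=10.3.9 -o libexample.dylib example.c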
-allowable_client client_name -client_name -compatibility_version -current_version -dead_strip -dependency-file -dylib_file -dylinker_install_name -dynamic -exported_symbols_list -filelist -flat_namespace -force_flat_namespace -headerpad_max_install_names -image_base -init -install_name -keep_private_externs -multi_module -multiply_defined -multiply_defined_unused -noall_load -no_dead_strip_inits_and_terms -nofixprebinding -nomultidefs -noprebind -noseglinkedit -pagezero_size -prebind -prebind_all_twolevel_modules -private_bundle -read_only_relocs -sectalign -sectobjectsymbols -whyload -seg1addr -sectcreate -sectobjectsymbols -sectorder -segaddr -segs_read_only_addr -segs_read_write_addr -seg_addr_table -seg_addr_table_filename -seglinkedit -segprot -segs_read_only_addr -segs_read_write_addr -single_module -static -sub_library -sub_umbrella -twolevel_namespace -umbrella -undefined -unexported_symbols_list -weak_reference_mismatches -whatsloaded These options are passed to the Darwin linker. The Darwin linker man page describes them in detail. DEC Alpha Options These -m options are defined for the DEC Alpha implementations: -mno-soft-float -msoft-float Use (do not use) the hardware floating-point instructions for floating-point operations. When -msoft-float is specified, functions in libgcc.a are used to perform floating-point operations. Unless they are replaced by routines that emulate the floating-point operations, or compiled in such a way as to call such emulation routines, these routines issue floating-point operations. If you are compiling for an Alpha without floating-point operations, you must ensure that the library is built so as not to call them. Note that Alpha implementations without floating-point operations are required to have floating-point registers. -mfp-reg -mno-fp-regs Generate code that uses (does not use) the floating-point register set. -mno-fp-regs implies -msoft-float. If the floating-point register set is not used, floating-point operands are passed in integer registers as if they were integers and floating-point results are passed in $0 instead of $f0. This is a non-standard calling sequence, so any function with a floating-point argument or return value called by code compiled with -mno-fp-regs must also be compiled with that option. A typical use of this option is building a kernel that does not use, and hence need not save and restore, any floating-point registers. -mieee The Alpha architecture implements floating-point hardware optimized for maximum performance. It is mostly compliant with the IEEE floating-point standard. However, for full compliance, software assistance is required. This option generates fully IEEE-compliant code except that the inexact-flag is not maintained (see below). If this option is turned on, the preprocessor macro "_IEEE_FP" is defined during compilation. The resulting code is less efficient but is able to correctly support denormalized numbers and exceptional IEEE values such as not-a-number and plus/minus infinity. Other Alpha compilers call this option -ieee_with_no_inexact. -mieee-with-inexact This is like -mieee except the generated code also maintains the IEEE inexact-flag. Turning on this option causes the generated code to implement fully-compliant IEEE math. In addition to "_IEEE_FP", "_IEEE_FP_EXACT" is defined as a preprocessor macro. On some Alpha implementations the resulting code may execute significantly slower than the code generated by default.
Since there is very little code that depends on the inexact-flag, you should normally not specify this option. Other Alpha compilers call this option -ieee_with_inexact. -mfp-trap-mode=trap-mode This option controls what floating-point related traps are enabled. Other Alpha compilers call this option -fptm trap- mode. The trap mode can be set to one of four values: n This is the default (normal) setting. The only traps that are enabled are the ones that cannot be disabled in software (e.g., division by zero trap). u In addition to the traps enabled by n, underflow traps are enabled as well. su Like u, but the instructions are marked to be safe for software completion (see Alpha architecture manual for details). sui Like su, but inexact traps are enabled as well. -mfp-rounding-mode=rounding-mode Selects the IEEE rounding mode. Other Alpha compilers call this option -fprm rounding-mode. The rounding-mode can be one of: n Normal IEEE rounding mode. Floating-point numbers are rounded towards the nearest machine number or towards the even machine number in case of a tie. m Round towards minus infinity. c Chopped rounding mode. Floating-point numbers are rounded towards zero. d Dynamic rounding mode. A field in the floating-point control register (fpcr, see Alpha architecture reference manual) controls the rounding mode in effect. The C library initializes this register for rounding towards plus infinity. Thus, unless your program modifies the fpcr, d corresponds to round towards plus infinity. -mtrap-precision=trap-precision In the Alpha architecture, floating-point traps are imprecise. This means without software assistance it is impossible to recover from a floating trap and program execution normally needs to be terminated. GCC can generate code that can assist operating system trap handlers in determining the exact location that caused a floating-point trap. Depending on the requirements of an application, different levels of precisions can be selected: p Program precision. This option is the default and means a trap handler can only identify which program caused a floating-point exception. f Function precision. The trap handler can determine the function that caused a floating-point exception. i Instruction precision. The trap handler can determine the exact instruction that caused a floating-point exception. Other Alpha compilers provide the equivalent options called -scope_safe and -resumption_safe. -mieee-conformant This option marks the generated code as IEEE conformant. You must not use this option unless you also specify -mtrap-precision=i and either -mfp-trap-mode=su or -mfp-trap-mode=sui. Its only effect is to emit the line .eflag 48 in the function prologue of the generated assembly file. -mbuild-constants Normally GCC examines a 32- or 64-bit integer constant to see if it can construct it from smaller constants in two or three instructions. If it cannot, it outputs the constant as a literal and generates code to load it from the data segment at run time. Use this option to require GCC to construct all integer constants using code, even if it takes more instructions (the maximum is six). You typically use this option to build a shared library dynamic loader. Itself a shared library, it must relocate itself in memory before it can find the variables and constants in its own data segment. -mbwx -mno-bwx -mcix -mno-cix -mfix -mno-fix -mmax -mno-max Indicate whether GCC should generate code to use the optional BWX, CIX, FIX and MAX instruction sets. 
The default is to use the instruction sets supported by the CPU type specified via -mcpu= option or that of the CPU on which GCC was built if none is specified. -mfloat-vax -mfloat-ieee Generate code that uses (does not use) VAX F and G floating- point arithmetic instead of IEEE single and double precision. -mexplicit-relocs -mno-explicit-relocs Older Alpha assemblers provided no way to generate symbol relocations except via assembler macros. Use of these macros does not allow optimal instruction scheduling. GNU binutils as of version 2.12 supports a new syntax that allows the compiler to explicitly mark which relocations should apply to which instructions. This option is mostly useful for debugging, as GCC detects the capabilities of the assembler when it is built and sets the default accordingly. -msmall-data -mlarge-data When -mexplicit-relocs is in effect, static data is accessed via gp-relative relocations. When -msmall-data is used, objects 8 bytes long or smaller are placed in a small data area (the ".sdata" and ".sbss" sections) and are accessed via 16-bit relocations off of the $gp register. This limits the size of the small data area to 64KB, but allows the variables to be directly accessed via a single instruction. The default is -mlarge-data. With this option the data area is limited to just below 2GB. Programs that require more than 2GB of data must use "malloc" or "mmap" to allocate the data in the heap instead of in the program's data segment. When generating code for shared libraries, -fpic implies -msmall-data and -fPIC implies -mlarge-data. -msmall-text -mlarge-text When -msmall-text is used, the compiler assumes that the code of the entire program (or shared library) fits in 4MB, and is thus reachable with a branch instruction. When -msmall-data is used, the compiler can assume that all local symbols share the same $gp value, and thus reduce the number of instructions required for a function call from 4 to 1. The default is -mlarge-text. -mcpu=cpu_type Set the instruction set and instruction scheduling parameters for machine type cpu_type. You can specify either the EV style name or the corresponding chip number. GCC supports scheduling parameters for the EV4, EV5 and EV6 family of processors and chooses the default values for the instruction set from the processor you specify. If you do not specify a processor type, GCC defaults to the processor on which the compiler was built. Supported values for cpu_type are ev4 ev45 21064 Schedules as an EV4 and has no instruction set extensions. ev5 21164 Schedules as an EV5 and has no instruction set extensions. ev56 21164a Schedules as an EV5 and supports the BWX extension. pca56 21164pc 21164PC Schedules as an EV5 and supports the BWX and MAX extensions. ev6 21264 Schedules as an EV6 and supports the BWX, FIX, and MAX extensions. ev67 21264a Schedules as an EV6 and supports the BWX, CIX, FIX, and MAX extensions. Native toolchains also support the value native, which selects the best architecture option for the host processor. -mcpu=native has no effect if GCC does not recognize the processor. -mtune=cpu_type Set only the instruction scheduling parameters for machine type cpu_type. The instruction set is not changed. Native toolchains also support the value native, which selects the best architecture option for the host processor. -mtune=native has no effect if GCC does not recognize the processor. -mmemory-latency=time Sets the latency the scheduler should assume for typical memory references as seen by the application. 
This number is highly dependent on the memory access patterns used by the application and the size of the external cache on the machine. Valid options for time are number A decimal number representing clock cycles. L1 L2 L3 main The compiler contains estimates of the number of clock cycles for "typical" EV4 & EV5 hardware for the Level 1, 2 & 3 caches (also called Dcache, Scache, and Bcache), as well as to main memory. Note that L3 is only valid for EV5. FR30 Options These options are defined specifically for the FR30 port. -msmall-model Use the small address space model. This can produce smaller code, but it does assume that all symbolic values and addresses fit into a 20-bit range. -mno-lsim Assume that runtime support has been provided and so there is no need to include the simulator library (libsim.a) on the linker command line. FT32 Options These options are defined specifically for the FT32 port. -msim Specifies that the program will be run on the simulator. This causes an alternate runtime startup and library to be linked. You must not use this option when generating programs that will run on real hardware; you must provide your own runtime library for whatever I/O functions are needed. -mlra Enable Local Register Allocation. This is still experimental for FT32, so by default the compiler uses standard reload. -mnodiv Do not use div and mod instructions. -mft32b Enable use of the extended instructions of the FT32B processor. -mcompress Compress all code using the Ft32B code compression scheme. -mnopm Do not generate code that reads program memory. FRV Options -mgpr-32 Only use the first 32 general-purpose registers. -mgpr-64 Use all 64 general-purpose registers. -mfpr-32 Use only the first 32 floating-point registers. -mfpr-64 Use all 64 floating-point registers. -mhard-float Use hardware instructions for floating-point operations. -msoft-float Use library routines for floating-point operations. -malloc-cc Dynamically allocate condition code registers. -mfixed-cc Do not try to dynamically allocate condition code registers, only use "icc0" and "fcc0". -mdword Change ABI to use double word insns. -mno-dword Do not use double word instructions. -mdouble Use floating-point double instructions. -mno-double Do not use floating-point double instructions. -mmedia Use media instructions. -mno-media Do not use media instructions. -mmuladd Use multiply and add/subtract instructions. -mno-muladd Do not use multiply and add/subtract instructions. -mfdpic Select the FDPIC ABI, which uses function descriptors to represent pointers to functions. Without any PIC/PIE-related options, it implies -fPIE. With -fpic or -fpie, it assumes GOT entries and small data are within a 12-bit range from the GOT base address; with -fPIC or -fPIE, GOT offsets are computed with 32 bits. With a bfin-elf target, this option implies -msim. -minline-plt Enable inlining of PLT entries in function calls to functions that are not known to bind locally. It has no effect without -mfdpic. It's enabled by default if optimizing for speed and compiling for shared libraries (i.e., -fPIC or -fpic), or when an optimization option such as -O3 or above is present in the command line. -mTLS Assume a large TLS segment when generating thread-local code. -mtls Do not assume a large TLS segment when generating thread- local code. -mgprel-ro Enable the use of "GPREL" relocations in the FDPIC ABI for data that is known to be in read-only sections. 
It's enabled by default, except for -fpic or -fpie: even though it may help make the global offset table smaller, it trades 1 instruction for 4. With -fPIC or -fPIE, it trades 3 instructions for 4, one of which may be shared by multiple symbols, and it avoids the need for a GOT entry for the referenced symbol, so it's more likely to be a win. If it is not, -mno-gprel-ro can be used to disable it. -multilib-library-pic Link with the (library, not FD) pic libraries. It's implied by -mlibrary-pic, as well as by -fPIC and -fpic without -mfdpic. You should never have to use it explicitly. -mlinked-fp Follow the EABI requirement of always creating a frame pointer whenever a stack frame is allocated. This option is enabled by default and can be disabled with -mno-linked-fp. -mlong-calls Use indirect addressing to call functions outside the current compilation unit. This allows the functions to be placed anywhere within the 32-bit address space. -malign-labels Try to align labels to an 8-byte boundary by inserting NOPs into the previous packet. This option only has an effect when VLIW packing is enabled. It doesn't create new packets; it merely adds NOPs to existing ones. -mlibrary-pic Generate position-independent EABI code. -macc-4 Use only the first four media accumulator registers. -macc-8 Use all eight media accumulator registers. -mpack Pack VLIW instructions. -mno-pack Do not pack VLIW instructions. -mno-eflags Do not mark ABI switches in e_flags. -mcond-move Enable the use of conditional-move instructions (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-cond-move Disable the use of conditional-move instructions. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mscc Enable the use of conditional set instructions (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-scc Disable the use of conditional set instructions. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mcond-exec Enable the use of conditional execution (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-cond-exec Disable the use of conditional execution. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mvliw-branch Run a pass to pack branches into VLIW instructions (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-vliw-branch Do not run a pass to pack branches into VLIW instructions. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mmulti-cond-exec Enable optimization of "&&" and "||" in conditional execution (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-multi-cond-exec Disable optimization of "&&" and "||" in conditional execution. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mnested-cond-exec Enable nested conditional execution optimizations (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-nested-cond-exec Disable nested conditional execution optimizations. This switch is mainly for debugging the compiler and will likely be removed in a future version. 
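As an illustrative sketch (the object name is hypothetical) of the FDPIC-related options described earlier in this section:

           gcc -O2 -fPIC -mfdpic -c module.c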
-moptimize-membar This switch removes redundant "membar" instructions from the compiler-generated code. It is enabled by default. -mno-optimize-membar This switch disables the automatic removal of redundant "membar" instructions from the generated code. -mtomcat-stats Cause gas to print out tomcat statistics. -mcpu=cpu Select the processor type for which to generate code. Possible values are frv, fr550, tomcat, fr500, fr450, fr405, fr400, fr300 and simple. GNU/Linux Options These -m options are defined for GNU/Linux targets: -mglibc Use the GNU C library. This is the default except on *-*-linux-*uclibc*, *-*-linux-*musl* and *-*-linux-*android* targets. -muclibc Use uClibc C library. This is the default on *-*-linux-*uclibc* targets. -mmusl Use the musl C library. This is the default on *-*-linux-*musl* targets. -mbionic Use Bionic C library. This is the default on *-*-linux-*android* targets. -mandroid Compile code compatible with Android platform. This is the default on *-*-linux-*android* targets. When compiling, this option enables -mbionic, -fPIC, -fno-exceptions and -fno-rtti by default. When linking, this option makes the GCC driver pass Android-specific options to the linker. Finally, this option causes the preprocessor macro "__ANDROID__" to be defined. -tno-android-cc Disable compilation effects of -mandroid, i.e., do not enable -mbionic, -fPIC, -fno-exceptions and -fno-rtti by default. -tno-android-ld Disable linking effects of -mandroid, i.e., pass standard Linux linking options to the linker. H8/300 Options These -m options are defined for the H8/300 implementations: -mrelax Shorten some address references at link time, when possible; uses the linker option -relax. -mh Generate code for the H8/300H. -ms Generate code for the H8S. -mn Generate code for the H8S and H8/300H in the normal mode. This switch must be used either with -mh or -ms. -ms2600 Generate code for the H8S/2600. This switch must be used with -ms. -mexr Extended registers are stored on stack before execution of function with monitor attribute. Default option is -mexr. This option is valid only for H8S targets. -mno-exr Extended registers are not stored on stack before execution of function with monitor attribute. Default option is -mno-exr. This option is valid only for H8S targets. -mint32 Make "int" data 32 bits by default. -malign-300 On the H8/300H and H8S, use the same alignment rules as for the H8/300. The default for the H8/300H and H8S is to align longs and floats on 4-byte boundaries. -malign-300 causes them to be aligned on 2-byte boundaries. This option has no effect on the H8/300. HPPA Options These -m options are defined for the HPPA family of computers: -march=architecture-type Generate code for the specified architecture. The choices for architecture-type are 1.0 for PA 1.0, 1.1 for PA 1.1, and 2.0 for PA 2.0 processors. Refer to /usr/lib/sched.models on an HP-UX system to determine the proper architecture option for your machine. Code compiled for lower numbered architectures runs on higher numbered architectures, but not the other way around. -mpa-risc-1-0 -mpa-risc-1-1 -mpa-risc-2-0 Synonyms for -march=1.0, -march=1.1, and -march=2.0 respectively. -mcaller-copies The caller copies function arguments passed by hidden reference. This option should be used with care as it is not compatible with the default 32-bit runtime. However, only aggregates larger than eight bytes are passed by hidden reference and the option provides better compatibility with OpenMP. 
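For example (the source name is hypothetical), code restricted to the PA 1.1 instruction set, which as noted above also runs on PA 2.0 processors, can be requested with:

           gcc -march=1.1 -c driver.c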
-mjump-in-delay This option is ignored and provided for compatibility purposes only. -mdisable-fpregs Prevent floating-point registers from being used in any manner. This is necessary for compiling kernels that perform lazy context switching of floating-point registers. If you use this option and attempt to perform floating-point operations, the compiler aborts. -mdisable-indexing Prevent the compiler from using indexing address modes. This avoids some rather obscure problems when compiling MIG generated code under MACH. -mno-space-regs Generate code that assumes the target has no space registers. This allows GCC to generate faster indirect calls and use unscaled index address modes. Such code is suitable for level 0 PA systems and kernels. -mfast-indirect-calls Generate code that assumes calls never cross space boundaries. This allows GCC to emit code that performs faster indirect calls. This option does not work in the presence of shared libraries or nested functions. -mfixed-range=register-range Generate code treating the given register range as fixed registers. A fixed register is one that the register allocator cannot use. This is useful when compiling kernel code. A register range is specified as two registers separated by a dash. Multiple register ranges can be specified separated by a comma. -mlong-load-store Generate 3-instruction load and store sequences as sometimes required by the HP-UX 10 linker. This is equivalent to the +k option to the HP compilers. -mportable-runtime Use the portable calling conventions proposed by HP for ELF systems. -mgas Enable the use of assembler directives only GAS understands. -mschedule=cpu-type Schedule code according to the constraints for the machine type cpu-type. The choices for cpu-type are 700 7100, 7100LC, 7200, 7300 and 8000. Refer to /usr/lib/sched.models on an HP-UX system to determine the proper scheduling option for your machine. The default scheduling is 8000. -mlinker-opt Enable the optimization pass in the HP-UX linker. Note this makes symbolic debugging impossible. It also triggers a bug in the HP-UX 8 and HP-UX 9 linkers in which they give bogus error messages when linking some programs. -msoft-float Generate output containing library calls for floating point. Warning: the requisite libraries are not available for all HPPA targets. Normally the facilities of the machine's usual C compiler are used, but this cannot be done directly in cross-compilation. You must make your own arrangements to provide suitable library functions for cross-compilation. -msoft-float changes the calling convention in the output file; therefore, it is only useful if you compile all of a program with this option. In particular, you need to compile libgcc.a, the library that comes with GCC, with -msoft-float in order for this to work. -msio Generate the predefine, "_SIO", for server IO. The default is -mwsio. This generates the predefines, "__hp9000s700", "__hp9000s700__" and "_WSIO", for workstation IO. These options are available under HP-UX and HI-UX. -mgnu-ld Use options specific to GNU ld. This passes -shared to ld when building a shared library. It is the default when GCC is configured, explicitly or implicitly, with the GNU linker. This option does not affect which ld is called; it only changes what parameters are passed to that ld. The ld that is called is determined by the --with-ld configure option, GCC's program search path, and finally by the user's PATH. The linker used by GCC can be printed using which `gcc -print-prog-name=ld`. 
This option is only available on the 64-bit HP-UX GCC, i.e. configured with hppa*64*-*-hpux*. -mhp-ld Use options specific to HP ld. This passes -b to ld when building a shared library and passes +Accept TypeMismatch to ld on all links. It is the default when GCC is configured, explicitly or implicitly, with the HP linker. This option does not affect which ld is called; it only changes what parameters are passed to that ld. The ld that is called is determined by the --with-ld configure option, GCC's program search path, and finally by the user's PATH. The linker used by GCC can be printed using which `gcc -print-prog-name=ld`. This option is only available on the 64-bit HP-UX GCC, i.e. configured with hppa*64*-*-hpux*. -mlong-calls Generate code that uses long call sequences. This ensures that a call is always able to reach linker generated stubs. The default is to generate long calls only when the distance from the call site to the beginning of the function or translation unit, as the case may be, exceeds a predefined limit set by the branch type being used. The limits for normal calls are 7,600,000 and 240,000 bytes, respectively for the PA 2.0 and PA 1.X architectures. Sibcalls are always limited at 240,000 bytes. Distances are measured from the beginning of functions when using the -ffunction-sections option, or when using the -mgas and -mno-portable-runtime options together under HP-UX with the SOM linker. It is normally not desirable to use this option as it degrades performance. However, it may be useful in large applications, particularly when partial linking is used to build the application. The types of long calls used depends on the capabilities of the assembler and linker, and the type of code being generated. The impact on systems that support long absolute calls, and long pic symbol-difference or pc-relative calls should be relatively small. However, an indirect call is used on 32-bit ELF systems in pic code and it is quite long. -munix=unix-std Generate compiler predefines and select a startfile for the specified UNIX standard. The choices for unix-std are 93, 95 and 98. 93 is supported on all HP-UX versions. 95 is available on HP-UX 10.10 and later. 98 is available on HP-UX 11.11 and later. The default values are 93 for HP-UX 10.00, 95 for HP-UX 10.10 though to 11.00, and 98 for HP-UX 11.11 and later. -munix=93 provides the same predefines as GCC 3.3 and 3.4. -munix=95 provides additional predefines for "XOPEN_UNIX" and "_XOPEN_SOURCE_EXTENDED", and the startfile unix95.o. -munix=98 provides additional predefines for "_XOPEN_UNIX", "_XOPEN_SOURCE_EXTENDED", "_INCLUDE__STDC_A1_SOURCE" and "_INCLUDE_XOPEN_SOURCE_500", and the startfile unix98.o. It is important to note that this option changes the interfaces for various library routines. It also affects the operational behavior of the C library. Thus, extreme care is needed in using this option. Library code that is intended to operate with more than one UNIX standard must test, set and restore the variable "__xpg4_extended_mask" as appropriate. Most GNU software doesn't provide this capability. -nolibdld Suppress the generation of link options to search libdld.sl when the -static option is specified on HP-UX 10 and later. -static The HP-UX implementation of setlocale in libc has a dependency on libdld.sl. There isn't an archive version of libdld.sl. Thus, when the -static option is specified, special link options are needed to resolve this dependency. 
On HP-UX 10 and later, the GCC driver adds the necessary options to link with libdld.sl when the -static option is specified. This causes the resulting binary to be dynamic. On the 64-bit port, the linkers generate dynamic binaries by default in any case. The -nolibdld option can be used to prevent the GCC driver from adding these link options. -threads Add support for multithreading with the dce thread library under HP-UX. This option sets flags for both the preprocessor and linker. IA-64 Options These are the -m options defined for the Intel IA-64 architecture. -mbig-endian Generate code for a big-endian target. This is the default for HP-UX. -mlittle-endian Generate code for a little-endian target. This is the default for AIX5 and GNU/Linux. -mgnu-as -mno-gnu-as Generate (or don't) code for the GNU assembler. This is the default. -mgnu-ld -mno-gnu-ld Generate (or don't) code for the GNU linker. This is the default. -mno-pic Generate code that does not use a global pointer register. The result is not position independent code, and violates the IA-64 ABI. -mvolatile-asm-stop -mno-volatile-asm-stop Generate (or don't) a stop bit immediately before and after volatile asm statements. -mregister-names -mno-register-names Generate (or don't) in, loc, and out register names for the stacked registers. This may make assembler output more readable. -mno-sdata -msdata Disable (or enable) optimizations that use the small data section. This may be useful for working around optimizer bugs. -mconstant-gp Generate code that uses a single constant global pointer value. This is useful when compiling kernel code. -mauto-pic Generate code that is self-relocatable. This implies -mconstant-gp. This is useful when compiling firmware code. -minline-float-divide-min-latency Generate code for inline divides of floating-point values using the minimum latency algorithm. -minline-float-divide-max-throughput Generate code for inline divides of floating-point values using the maximum throughput algorithm. -mno-inline-float-divide Do not generate inline code for divides of floating-point values. -minline-int-divide-min-latency Generate code for inline divides of integer values using the minimum latency algorithm. -minline-int-divide-max-throughput Generate code for inline divides of integer values using the maximum throughput algorithm. -mno-inline-int-divide Do not generate inline code for divides of integer values. -minline-sqrt-min-latency Generate code for inline square roots using the minimum latency algorithm. -minline-sqrt-max-throughput Generate code for inline square roots using the maximum throughput algorithm. -mno-inline-sqrt Do not generate inline code for "sqrt". -mfused-madd -mno-fused-madd Do (don't) generate code that uses the fused multiply/add or multiply/subtract instructions. The default is to use these instructions. -mno-dwarf2-asm -mdwarf2-asm Don't (or do) generate assembler code for the DWARF line number debugging info. This may be useful when not using the GNU assembler. -mearly-stop-bits -mno-early-stop-bits Allow stop bits to be placed earlier than immediately preceding the instruction that triggered the stop bit. This can improve instruction scheduling, but does not always do so. -mfixed-range=register-range Generate code treating the given register range as fixed registers. A fixed register is one that the register allocator cannot use. This is useful when compiling kernel code. A register range is specified as two registers separated by a dash. 
Multiple register ranges can be specified separated by a comma. -mtls-size=tls-size Specify bit size of immediate TLS offsets. Valid values are 14, 22, and 64. -mtune=cpu-type Tune the instruction scheduling for a particular CPU, Valid values are itanium, itanium1, merced, itanium2, and mckinley. -milp32 -mlp64 Generate code for a 32-bit or 64-bit environment. The 32-bit environment sets int, long and pointer to 32 bits. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits. These are HP-UX specific flags. -mno-sched-br-data-spec -msched-br-data-spec (Dis/En)able data speculative scheduling before reload. This results in generation of "ld.a" instructions and the corresponding check instructions ("ld.c" / "chk.a"). The default setting is disabled. -msched-ar-data-spec -mno-sched-ar-data-spec (En/Dis)able data speculative scheduling after reload. This results in generation of "ld.a" instructions and the corresponding check instructions ("ld.c" / "chk.a"). The default setting is enabled. -mno-sched-control-spec -msched-control-spec (Dis/En)able control speculative scheduling. This feature is available only during region scheduling (i.e. before reload). This results in generation of the "ld.s" instructions and the corresponding check instructions "chk.s". The default setting is disabled. -msched-br-in-data-spec -mno-sched-br-in-data-spec (En/Dis)able speculative scheduling of the instructions that are dependent on the data speculative loads before reload. This is effective only with -msched-br-data-spec enabled. The default setting is enabled. -msched-ar-in-data-spec -mno-sched-ar-in-data-spec (En/Dis)able speculative scheduling of the instructions that are dependent on the data speculative loads after reload. This is effective only with -msched-ar-data-spec enabled. The default setting is enabled. -msched-in-control-spec -mno-sched-in-control-spec (En/Dis)able speculative scheduling of the instructions that are dependent on the control speculative loads. This is effective only with -msched-control-spec enabled. The default setting is enabled. -mno-sched-prefer-non-data-spec-insns -msched-prefer-non-data-spec-insns If enabled, data-speculative instructions are chosen for schedule only if there are no other choices at the moment. This makes the use of the data speculation much more conservative. The default setting is disabled. -mno-sched-prefer-non-control-spec-insns -msched-prefer-non-control-spec-insns If enabled, control-speculative instructions are chosen for schedule only if there are no other choices at the moment. This makes the use of the control speculation much more conservative. The default setting is disabled. -mno-sched-count-spec-in-critical-path -msched-count-spec-in-critical-path If enabled, speculative dependencies are considered during computation of the instructions priorities. This makes the use of the speculation a bit more conservative. The default setting is disabled. -msched-spec-ldc Use a simple data speculation check. This option is on by default. -msched-control-spec-ldc Use a simple check for control speculation. This option is on by default. -msched-stop-bits-after-every-cycle Place a stop bit after every cycle when scheduling. This option is on by default. -msched-fp-mem-deps-zero-cost Assume that floating-point stores and loads are not likely to cause a conflict when placed into the same instruction group. This option is disabled by default. -msel-sched-dont-check-control-spec Generate checks for control speculation in selective scheduling. 
This flag is disabled by default. -msched-max-memory-insns=max-insns Limit on the number of memory insns per instruction group, giving lower priority to subsequent memory insns attempting to schedule in the same instruction group. Frequently useful to prevent cache bank conflicts. The default value is 1. -msched-max-memory-insns-hard-limit Makes the limit specified by msched-max-memory-insns a hard limit, disallowing more than that number in an instruction group. Otherwise, the limit is "soft", meaning that non- memory operations are preferred when the limit is reached, but memory operations may still be scheduled. LM32 Options These -m options are defined for the LatticeMico32 architecture: -mbarrel-shift-enabled Enable barrel-shift instructions. -mdivide-enabled Enable divide and modulus instructions. -mmultiply-enabled Enable multiply instructions. -msign-extend-enabled Enable sign extend instructions. -muser-enabled Enable user-defined instructions. M32C Options -mcpu=name Select the CPU for which code is generated. name may be one of r8c for the R8C/Tiny series, m16c for the M16C (up to /60) series, m32cm for the M16C/80 series, or m32c for the M32C/80 series. -msim Specifies that the program will be run on the simulator. This causes an alternate runtime library to be linked in which supports, for example, file I/O. You must not use this option when generating programs that will run on real hardware; you must provide your own runtime library for whatever I/O functions are needed. -memregs=number Specifies the number of memory-based pseudo-registers GCC uses during code generation. These pseudo-registers are used like real registers, so there is a tradeoff between GCC's ability to fit the code into available registers, and the performance penalty of using memory instead of registers. Note that all modules in a program must be compiled with the same value for this option. Because of that, you must not use this option with GCC's default runtime libraries. M32R/D Options These -m options are defined for Renesas M32R/D architectures: -m32r2 Generate code for the M32R/2. -m32rx Generate code for the M32R/X. -m32r Generate code for the M32R. This is the default. -mmodel=small Assume all objects live in the lower 16MB of memory (so that their addresses can be loaded with the "ld24" instruction), and assume all subroutines are reachable with the "bl" instruction. This is the default. The addressability of a particular object can be set with the "model" attribute. -mmodel=medium Assume objects may be anywhere in the 32-bit address space (the compiler generates "seth/add3" instructions to load their addresses), and assume all subroutines are reachable with the "bl" instruction. -mmodel=large Assume objects may be anywhere in the 32-bit address space (the compiler generates "seth/add3" instructions to load their addresses), and assume subroutines may not be reachable with the "bl" instruction (the compiler generates the much slower "seth/add3/jl" instruction sequence). -msdata=none Disable use of the small data area. Variables are put into one of ".data", ".bss", or ".rodata" (unless the "section" attribute has been specified). This is the default. The small data area consists of sections ".sdata" and ".sbss". Objects may be explicitly put in the small data area with the "section" attribute using one of these sections. -msdata=sdata Put small global and static data in the small data area, but do not generate special code to reference them. 
-msdata=use Put small global and static data in the small data area, and generate special instructions to reference them. -G num Put global and static objects less than or equal to num bytes into the small data or BSS sections instead of the normal data or BSS sections. The default value of num is 8. The -msdata option must be set to one of sdata or use for this option to have any effect. All modules should be compiled with the same -G num value. Compiling with different values of num may or may not work; if it doesn't, the linker gives an error message; incorrect code is not generated. -mdebug Makes the M32R-specific code in the compiler display some statistics that might help in debugging programs. -malign-loops Align all loops to a 32-byte boundary. -mno-align-loops Do not enforce a 32-byte alignment for loops. This is the default. -missue-rate=number Issue number instructions per cycle. number can only be 1 or 2. -mbranch-cost=number number can only be 1 or 2. If it is 1, branches are preferred over conditional code; if it is 2, the opposite applies. -mflush-trap=number Specifies the trap number to use to flush the cache. The default is 12. Valid numbers are between 0 and 15 inclusive. -mno-flush-trap Specifies that the cache cannot be flushed by using a trap. -mflush-func=name Specifies the name of the operating system function to call to flush the cache. The default is _flush_cache, but a function call is only used if a trap is not available. -mno-flush-func Indicates that there is no OS function for flushing the cache. M680x0 Options These are the -m options defined for M680x0 and ColdFire processors. The default settings depend on which architecture was selected when the compiler was configured; the defaults for the most common choices are given below. -march=arch Generate code for a specific M680x0 or ColdFire instruction set architecture. Permissible values of arch for M680x0 architectures are: 68000, 68010, 68020, 68030, 68040, 68060 and cpu32. ColdFire architectures are selected according to Freescale's ISA classification and the permissible values are: isaa, isaaplus, isab and isac. GCC defines a macro "__mcfarch__" whenever it is generating code for a ColdFire target. The arch in this macro is one of the -march arguments given above. When used together, -march and -mtune select code that runs on a family of similar processors but that is optimized for a particular microarchitecture. -mcpu=cpu Generate code for a specific M680x0 or ColdFire processor. The M680x0 cpus are: 68000, 68010, 68020, 68030, 68040, 68060, 68302, 68332 and cpu32. The ColdFire cpus are given by the table below, which also classifies the CPUs into families:

           Family : -mcpu arguments
           51     : 51 51ac 51ag 51cn 51em 51je 51jf 51jg 51jm 51mm 51qe 51qm
           5206   : 5202 5204 5206
           5206e  : 5206e
           5208   : 5207 5208
           5211a  : 5210a 5211a
           5213   : 5211 5212 5213
           5216   : 5214 5216
           52235  : 52230 52231 52232 52233 52234 52235
           5225   : 5224 5225
           52259  : 52252 52254 52255 52256 52258 52259
           5235   : 5232 5233 5234 5235 523x
           5249   : 5249
           5250   : 5250
           5271   : 5270 5271
           5272   : 5272
           5275   : 5274 5275
           5282   : 5280 5281 5282 528x
           53017  : 53011 53012 53013 53014 53015 53016 53017
           5307   : 5307
           5329   : 5327 5328 5329 532x
           5373   : 5372 5373 537x
           5407   : 5407
           5475   : 5470 5471 5472 5473 5474 5475 547x 5480 5481 5482 5483 5484 5485

-mcpu=cpu overrides -march=arch if arch is compatible with cpu. Other combinations of -mcpu and -march are rejected. GCC defines the macro "__mcf_cpu_cpu" when a ColdFire target cpu is selected.
It also defines "__mcf_family_family", where the value of family is given by the table above. -mtune=tune Tune the code for a particular microarchitecture within the constraints set by -march and -mcpu. The M680x0 microarchitectures are: 68000, 68010, 68020, 68030, 68040, 68060 and cpu32. The ColdFire microarchitectures are: cfv1, cfv2, cfv3, cfv4 and cfv4e. You can also use -mtune=68020-40 for code that needs to run relatively well on 68020, 68030 and 68040 targets. -mtune=68020-60 is similar but includes 68060 targets as well. These two options select the same tuning decisions as -m68020-40 and -m68020-60 respectively. GCC defines the macros "__mcarch" and "__mcarch__" when tuning for 680x0 architecture arch. It also defines "mcarch" unless either -ansi or a non-GNU -std option is used. If GCC is tuning for a range of architectures, as selected by -mtune=68020-40 or -mtune=68020-60, it defines the macros for every architecture in the range. GCC also defines the macro "__muarch__" when tuning for ColdFire microarchitecture uarch, where uarch is one of the arguments given above. -m68000 -mc68000 Generate output for a 68000. This is the default when the compiler is configured for 68000-based systems. It is equivalent to -march=68000. Use this option for microcontrollers with a 68000 or EC000 core, including the 68008, 68302, 68306, 68307, 68322, 68328 and 68356. -m68010 Generate output for a 68010. This is the default when the compiler is configured for 68010-based systems. It is equivalent to -march=68010. -m68020 -mc68020 Generate output for a 68020. This is the default when the compiler is configured for 68020-based systems. It is equivalent to -march=68020. -m68030 Generate output for a 68030. This is the default when the compiler is configured for 68030-based systems. It is equivalent to -march=68030. -m68040 Generate output for a 68040. This is the default when the compiler is configured for 68040-based systems. It is equivalent to -march=68040. This option inhibits the use of 68881/68882 instructions that have to be emulated by software on the 68040. Use this option if your 68040 does not have code to emulate those instructions. -m68060 Generate output for a 68060. This is the default when the compiler is configured for 68060-based systems. It is equivalent to -march=68060. This option inhibits the use of 68020 and 68881/68882 instructions that have to be emulated by software on the 68060. Use this option if your 68060 does not have code to emulate those instructions. -mcpu32 Generate output for a CPU32. This is the default when the compiler is configured for CPU32-based systems. It is equivalent to -march=cpu32. Use this option for microcontrollers with a CPU32 or CPU32+ core, including the 68330, 68331, 68332, 68333, 68334, 68336, 68340, 68341, 68349 and 68360. -m5200 Generate output for a 520X ColdFire CPU. This is the default when the compiler is configured for 520X-based systems. It is equivalent to -mcpu=5206, and is now deprecated in favor of that option. Use this option for microcontroller with a 5200 core, including the MCF5202, MCF5203, MCF5204 and MCF5206. -m5206e Generate output for a 5206e ColdFire CPU. The option is now deprecated in favor of the equivalent -mcpu=5206e. -m528x Generate output for a member of the ColdFire 528X family. The option is now deprecated in favor of the equivalent -mcpu=528x. -m5307 Generate output for a ColdFire 5307 CPU. The option is now deprecated in favor of the equivalent -mcpu=5307. -m5407 Generate output for a ColdFire 5407 CPU. 
The option is now deprecated in favor of the equivalent -mcpu=5407. -mcfv4e Generate output for a ColdFire V4e family CPU (e.g. 547x/548x). This includes use of hardware floating-point instructions. The option is equivalent to -mcpu=547x, and is now deprecated in favor of that option. -m68020-40 Generate output for a 68040, without using any of the new instructions. This results in code that can run relatively efficiently on either a 68020/68881 or a 68030 or a 68040. The generated code does use the 68881 instructions that are emulated on the 68040. The option is equivalent to -march=68020 -mtune=68020-40. -m68020-60 Generate output for a 68060, without using any of the new instructions. This results in code that can run relatively efficiently on either a 68020/68881 or a 68030 or a 68040. The generated code does use the 68881 instructions that are emulated on the 68060. The option is equivalent to -march=68020 -mtune=68020-60. -mhard-float -m68881 Generate floating-point instructions. This is the default for 68020 and above, and for ColdFire devices that have an FPU. It defines the macro "__HAVE_68881__" on M680x0 targets and "__mcffpu__" on ColdFire targets. -msoft-float Do not generate floating-point instructions; use library calls instead. This is the default for 68000, 68010, and 68832 targets. It is also the default for ColdFire devices that have no FPU. -mdiv -mno-div Generate (do not generate) ColdFire hardware divide and remainder instructions. If -march is used without -mcpu, the default is "on" for ColdFire architectures and "off" for M680x0 architectures. Otherwise, the default is taken from the target CPU (either the default CPU, or the one specified by -mcpu). For example, the default is "off" for -mcpu=5206 and "on" for -mcpu=5206e. GCC defines the macro "__mcfhwdiv__" when this option is enabled. -mshort Consider type "int" to be 16 bits wide, like "short int". Additionally, parameters passed on the stack are also aligned to a 16-bit boundary even on targets whose API mandates promotion to 32-bit. -mno-short Do not consider type "int" to be 16 bits wide. This is the default. -mnobitfield -mno-bitfield Do not use the bit-field instructions. The -m68000, -mcpu32 and -m5200 options imply -mnobitfield. -mbitfield Do use the bit-field instructions. The -m68020 option implies -mbitfield. This is the default if you use a configuration designed for a 68020. -mrtd Use a different function-calling convention, in which functions that take a fixed number of arguments return with the "rtd" instruction, which pops their arguments while returning. This saves one instruction in the caller since there is no need to pop the arguments there. This calling convention is incompatible with the one normally used on Unix, so you cannot use it if you need to call libraries compiled with the Unix compiler. Also, you must provide function prototypes for all functions that take variable numbers of arguments (including "printf"); otherwise incorrect code is generated for calls to those functions. In addition, seriously incorrect code results if you call a function with too many arguments. (Normally, extra arguments are harmlessly ignored.) The "rtd" instruction is supported by the 68010, 68020, 68030, 68040, 68060 and CPU32 processors, but not by the 68000 or 5200. The default is -mno-rtd. -malign-int -mno-align-int Control whether GCC aligns "int", "long", "long long", "float", "double", and "long double" variables on a 32-bit boundary (-malign-int) or a 16-bit boundary (-mno-align-int). 
Aligning variables on 32-bit boundaries produces code that runs somewhat faster on processors with 32-bit busses at the expense of more memory. Warning: if you use the -malign-int switch, GCC aligns structures containing the above types differently than most published application binary interface specifications for the m68k. -mpcrel Use the pc-relative addressing mode of the 68000 directly, instead of using a global offset table. At present, this option implies -fpic, allowing at most a 16-bit offset for pc-relative addressing. -fPIC is not presently supported with -mpcrel, though this could be supported for 68020 and higher processors. -mno-strict-align -mstrict-align Do not (do) assume that unaligned memory references are handled by the system. -msep-data Generate code that allows the data segment to be located in a different area of memory from the text segment. This allows for execute-in-place in an environment without virtual memory management. This option implies -fPIC. -mno-sep-data Generate code that assumes that the data segment follows the text segment. This is the default. -mid-shared-library Generate code that supports shared libraries via the library ID method. This allows for execute-in-place and shared libraries in an environment without virtual memory management. This option implies -fPIC. -mno-id-shared-library Generate code that doesn't assume ID-based shared libraries are being used. This is the default. -mshared-library-id=n Specifies the identification number of the ID-based shared library being compiled. Specifying a value of 0 generates more compact code; specifying other values forces the allocation of that number to the current library, but is no more space- or time-efficient than omitting this option. -mxgot -mno-xgot When generating position-independent code for ColdFire, generate code that works if the GOT has more than 8192 entries. This code is larger and slower than code generated without this option. On M680x0 processors, this option is not needed; -fPIC suffices. GCC normally uses a single instruction to load values from the GOT. While this is relatively efficient, it only works if the GOT is smaller than about 64k. Anything larger causes the linker to report an error such as: relocation truncated to fit: R_68K_GOT16O foobar If this happens, you should recompile your code with -mxgot. It should then work with very large GOTs. However, code generated with -mxgot is less efficient, since it takes 4 instructions to fetch the value of a global symbol. Note that some linkers, including newer versions of the GNU linker, can create multiple GOTs and sort GOT entries. If you have such a linker, you should only need to use -mxgot when compiling a single object file that accesses more than 8192 GOT entries. Very few do. These options have no effect unless GCC is generating position-independent code. -mlong-jump-table-offsets Use 32-bit offsets in "switch" tables. The default is to use 16-bit offsets. MCore Options These are the -m options defined for the Motorola M*Core processors. -mhardlit -mno-hardlit Inline constants into the code stream if it can be done in two instructions or less. -mdiv -mno-div Use the divide instruction. (Enabled by default). -mrelax-immediate -mno-relax-immediate Allow arbitrary-sized immediates in bit operations. -mwide-bitfields -mno-wide-bitfields Always treat bit-fields as "int"-sized. -m4byte-functions -mno-4byte-functions Force all functions to be aligned to a 4-byte boundary. 
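The ABI note under -malign-int above is easier to see with a concrete structure. The following is only a minimal sketch: the cross-compiler spelling in the comment is just a typical example, and the exact sizes should be confirmed on the target.

    /* Layout differs between -malign-int and the -mno-align-int default. */
    #include <stdio.h>
    #include <stddef.h>

    struct sample {
        char c;   /* 1 byte */
        int  i;   /* offset 2 with -mno-align-int, offset 4 with -malign-int */
    };

    int main (void)
    {
        /* Built with, e.g., "m68k-elf-gcc -malign-int test.c", the structure
           grows and the offset of 'i' moves, so objects compiled with and
           without -malign-int must not be linked together.                  */
        printf ("sizeof = %zu, offsetof (i) = %zu\n",
                sizeof (struct sample), offsetof (struct sample, i));
        return 0;
    }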
-mcallgraph-data -mno-callgraph-data Emit callgraph information. -mslow-bytes -mno-slow-bytes Prefer word access when reading byte quantities. -mlittle-endian -mbig-endian Generate code for a little-endian target. -m210 -m340 Generate code for the 210 processor. -mno-lsim Assume that runtime support has been provided and so omit the simulator library (libsim.a) from the linker command line. -mstack-increment=size Set the maximum amount for a single stack increment operation. Large values can increase the speed of programs that contain functions that need a large amount of stack space, but they can also trigger a segmentation fault if the stack is extended too much. The default value is 0x1000. MeP Options -mabsdiff Enables the "abs" instruction, which is the absolute difference between two registers. -mall-opts Enables all the optional instructions---average, multiply, divide, bit operations, leading zero, absolute difference, min/max, clip, and saturation. -maverage Enables the "ave" instruction, which computes the average of two registers. -mbased=n Variables of size n bytes or smaller are placed in the ".based" section by default. Based variables use the $tp register as a base register, and there is a 128-byte limit to the ".based" section. -mbitops Enables the bit operation instructions---bit test ("btstm"), set ("bsetm"), clear ("bclrm"), invert ("bnotm"), and test- and-set ("tas"). -mc=name Selects which section constant data is placed in. name may be tiny, near, or far. -mclip Enables the "clip" instruction. Note that -mclip is not useful unless you also provide -mminmax. -mconfig=name Selects one of the built-in core configurations. Each MeP chip has one or more modules in it; each module has a core CPU and a variety of coprocessors, optional instructions, and peripherals. The "MeP-Integrator" tool, not part of GCC, provides these configurations through this option; using this option is the same as using all the corresponding command- line options. The default configuration is default. -mcop Enables the coprocessor instructions. By default, this is a 32-bit coprocessor. Note that the coprocessor is normally enabled via the -mconfig= option. -mcop32 Enables the 32-bit coprocessor's instructions. -mcop64 Enables the 64-bit coprocessor's instructions. -mivc2 Enables IVC2 scheduling. IVC2 is a 64-bit VLIW coprocessor. -mdc Causes constant variables to be placed in the ".near" section. -mdiv Enables the "div" and "divu" instructions. -meb Generate big-endian code. -mel Generate little-endian code. -mio-volatile Tells the compiler that any variable marked with the "io" attribute is to be considered volatile. -ml Causes variables to be assigned to the ".far" section by default. -mleadz Enables the "leadz" (leading zero) instruction. -mm Causes variables to be assigned to the ".near" section by default. -mminmax Enables the "min" and "max" instructions. -mmult Enables the multiplication and multiply-accumulate instructions. -mno-opts Disables all the optional instructions enabled by -mall-opts. -mrepeat Enables the "repeat" and "erepeat" instructions, used for low-overhead looping. -ms Causes all variables to default to the ".tiny" section. Note that there is a 65536-byte limit to this section. Accesses to these variables use the %gp base register. -msatur Enables the saturation instructions. Note that the compiler does not currently generate these itself, but this option is included for compatibility with other tools, like "as". 
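To make the size-driven placement rules above concrete, here is a minimal sketch assuming a module compiled with -mbased=8; which section each object actually lands in can be confirmed by inspecting the generated assembly (for example with -S).

    /* Size-based placement under -mbased=8 (MeP).                          */
    char flags[8];      /* 8 bytes: small enough for ".based", addressed
                           relative to $tp                                  */
    char buffer[256];   /* larger than the limit: placed in the ordinary
                           data/BSS sections                                */

    /* Placement can still be forced explicitly; this uses GCC's generic
       section attribute rather than any MeP-specific syntax.              */
    int shadow __attribute__ ((section (".based"))) = 1;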
-msdram Link the SDRAM-based runtime instead of the default ROM-based runtime.
-msim Link the simulator run-time libraries.
-msimnovec Link the simulator runtime libraries, excluding built-in support for reset and exception vectors and tables.
-mtf Causes all functions to default to the ".far" section. Without this option, functions default to the ".near" section.
-mtiny=n Variables that are n bytes or smaller are allocated to the ".tiny" section. These variables use the $gp base register. The default for this option is 4, but note that there's a 65536-byte limit to the ".tiny" section.
MicroBlaze Options
-msoft-float Use software emulation for floating point (default).
-mhard-float Use hardware floating-point instructions.
-mmemcpy Do not optimize block moves, use "memcpy".
-mno-clearbss This option is deprecated. Use -fno-zero-initialized-in-bss instead.
-mcpu=cpu-type Use features of, and schedule code for, the given CPU. Supported values are in the format vX.YY.Z, where X is a major version, YY is the minor version, and Z is compatibility code. Example values are v3.00.a, v4.00.b, v5.00.a, v5.00.b, v6.00.a.
-mxl-soft-mul Use software multiply emulation (default).
-mxl-soft-div Use software emulation for divides (default).
-mxl-barrel-shift Use the hardware barrel shifter.
-mxl-pattern-compare Use pattern compare instructions.
-msmall-divides Use table lookup optimization for small signed integer divisions.
-mxl-stack-check This option is deprecated. Use -fstack-check instead.
-mxl-gp-opt Use GP-relative ".sdata"/".sbss" sections.
-mxl-multiply-high Use multiply high instructions for high part of 32x32 multiply.
-mxl-float-convert Use hardware floating-point conversion instructions.
-mxl-float-sqrt Use hardware floating-point square root instruction.
-mbig-endian Generate code for a big-endian target.
-mlittle-endian Generate code for a little-endian target.
-mxl-reorder Use reorder instructions (swap and byte reversed load/store).
-mxl-mode-app-model Select application model app-model. Valid models are
    executable normal executable (default), uses startup code crt0.o.
    xmdstub for use with Xilinx Microprocessor Debugger (XMD) based software intrusive debug agent called xmdstub. This uses startup file crt1.o and sets the start address of the program to 0x800.
    bootstrap for applications that are loaded using a bootloader. This model uses startup file crt2.o which does not contain a processor reset vector handler. This is suitable for transferring control on a processor reset to the bootloader rather than the application.
    novectors for applications that do not require any of the MicroBlaze vectors. This option may be useful for applications running within a monitoring application. This model uses crt3.o as a startup file.
Option -xl-mode-app-model is a deprecated alias for -mxl-mode-app-model.
-mpic-data-is-text-relative Assume that the displacement between the text and data segments is fixed at static link time. This allows data to be referenced by offset from start of text address instead of GOT since PC-relative addressing is not supported.
MIPS Options
-EB Generate big-endian code.
-EL Generate little-endian code. This is the default for mips*el-*-* configurations.
-march=arch Generate code that runs on arch, which can be the name of a generic MIPS ISA, or the name of a particular processor. The ISA names are: mips1, mips2, mips3, mips4, mips32, mips32r2, mips32r3, mips32r5, mips32r6, mips64, mips64r2, mips64r3, mips64r5 and mips64r6.
The processor names are: 4kc, 4km, 4kp, 4ksc, 4kec, 4kem, 4kep, 4ksd, 5kc, 5kf, 20kc, 24kc, 24kf2_1, 24kf1_1, 24kec, 24kef2_1, 24kef1_1, 34kc, 34kf2_1, 34kf1_1, 34kn, 74kc, 74kf2_1, 74kf1_1, 74kf3_2, 1004kc, 1004kf2_1, 1004kf1_1, i6400, i6500, interaptiv, loongson2e, loongson2f, loongson3a, gs464, gs464e, gs264e, m4k, m14k, m14kc, m14ke, m14kec, m5100, m5101, octeon, octeon+, octeon2, octeon3, orion, p5600, p6600, r2000, r3000, r3900, r4000, r4400, r4600, r4650, r4700, r5900, r6000, r8000, rm7000, rm9000, r10000, r12000, r14000, r16000, sb1, sr71000, vr4100, vr4111, vr4120, vr4130, vr4300, vr5000, vr5400, vr5500, xlr and xlp. The special value from-abi selects the most compatible architecture for the selected ABI (that is, mips1 for 32-bit ABIs and mips3 for 64-bit ABIs). The native Linux/GNU toolchain also supports the value native, which selects the best architecture option for the host processor. -march=native has no effect if GCC does not recognize the processor. In processor names, a final 000 can be abbreviated as k (for example, -march=r2k). Prefixes are optional, and vr may be written r. Names of the form nf2_1 refer to processors with FPUs clocked at half the rate of the core, names of the form nf1_1 refer to processors with FPUs clocked at the same rate as the core, and names of the form nf3_2 refer to processors with FPUs clocked a ratio of 3:2 with respect to the core. For compatibility reasons, nf is accepted as a synonym for nf2_1 while nx and bfx are accepted as synonyms for nf1_1. GCC defines two macros based on the value of this option. The first is "_MIPS_ARCH", which gives the name of target architecture, as a string. The second has the form "_MIPS_ARCH_foo", where foo is the capitalized value of "_MIPS_ARCH". For example, -march=r2000 sets "_MIPS_ARCH" to "r2000" and defines the macro "_MIPS_ARCH_R2000". Note that the "_MIPS_ARCH" macro uses the processor names given above. In other words, it has the full prefix and does not abbreviate 000 as k. In the case of from-abi, the macro names the resolved architecture (either "mips1" or "mips3"). It names the default architecture when no -march option is given. -mtune=arch Optimize for arch. Among other things, this option controls the way instructions are scheduled, and the perceived cost of arithmetic operations. The list of arch values is the same as for -march. When this option is not used, GCC optimizes for the processor specified by -march. By using -march and -mtune together, it is possible to generate code that runs on a family of processors, but optimize the code for one particular member of that family. -mtune defines the macros "_MIPS_TUNE" and "_MIPS_TUNE_foo", which work in the same way as the -march ones described above. -mips1 Equivalent to -march=mips1. -mips2 Equivalent to -march=mips2. -mips3 Equivalent to -march=mips3. -mips4 Equivalent to -march=mips4. -mips32 Equivalent to -march=mips32. -mips32r3 Equivalent to -march=mips32r3. -mips32r5 Equivalent to -march=mips32r5. -mips32r6 Equivalent to -march=mips32r6. -mips64 Equivalent to -march=mips64. -mips64r2 Equivalent to -march=mips64r2. -mips64r3 Equivalent to -march=mips64r3. -mips64r5 Equivalent to -march=mips64r5. -mips64r6 Equivalent to -march=mips64r6. -mips16 -mno-mips16 Generate (do not generate) MIPS16 code. If GCC is targeting a MIPS32 or MIPS64 architecture, it makes use of the MIPS16e ASE. MIPS16 code generation can also be controlled on a per- function basis by means of "mips16" and "nomips16" attributes. 
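The per-function control mentioned above can be sketched as follows using the "mips16" and "nomips16" attributes; the function names are invented, and whether MIPS16 actually helps a given routine should be measured.

    /* Mixing MIPS16 and standard-ISA code in one translation unit. */
    static int __attribute__ ((nomips16))
    hot_loop (const int *p, int n)      /* keep the hot path in the full ISA */
    {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += p[i];
        return sum;
    }

    static int __attribute__ ((mips16))
    cold_path (int x)                   /* compact encoding for rarely run code */
    {
        return 3 * x + 1;
    }

Calls between the two encodings are subject to the interlinking rules discussed under -minterlink-compressed below.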
-mflip-mips16 Generate MIPS16 code on alternating functions. This option is provided for regression testing of mixed MIPS16/non-MIPS16 code generation, and is not intended for ordinary use in compiling user code. -minterlink-compressed -mno-interlink-compressed Require (do not require) that code using the standard (uncompressed) MIPS ISA be link-compatible with MIPS16 and microMIPS code, and vice versa. For example, code using the standard ISA encoding cannot jump directly to MIPS16 or microMIPS code; it must either use a call or an indirect jump. -minterlink-compressed therefore disables direct jumps unless GCC knows that the target of the jump is not compressed. -minterlink-mips16 -mno-interlink-mips16 Aliases of -minterlink-compressed and -mno-interlink-compressed. These options predate the microMIPS ASE and are retained for backwards compatibility. -mabi=32 -mabi=o64 -mabi=n32 -mabi=64 -mabi=eabi Generate code for the given ABI. Note that the EABI has a 32-bit and a 64-bit variant. GCC normally generates 64-bit code when you select a 64-bit architecture, but you can use -mgp32 to get 32-bit code instead. For information about the O64 ABI, see <http://gcc.gnu.org/projects/mipso64-abi.html >. GCC supports a variant of the o32 ABI in which floating-point registers are 64 rather than 32 bits wide. You can select this combination with -mabi=32 -mfp64. This ABI relies on the "mthc1" and "mfhc1" instructions and is therefore only supported for MIPS32R2, MIPS32R3 and MIPS32R5 processors. The register assignments for arguments and return values remain the same, but each scalar value is passed in a single 64-bit register rather than a pair of 32-bit registers. For example, scalar floating-point values are returned in $f0 only, not a $f0/$f1 pair. The set of call-saved registers also remains the same in that the even-numbered double- precision registers are saved. Two additional variants of the o32 ABI are supported to enable a transition from 32-bit to 64-bit registers. These are FPXX (-mfpxx) and FP64A (-mfp64 -mno-odd-spreg). The FPXX extension mandates that all code must execute correctly when run using 32-bit or 64-bit registers. The code can be interlinked with either FP32 or FP64, but not both. The FP64A extension is similar to the FP64 extension but forbids the use of odd-numbered single-precision registers. This can be used in conjunction with the "FRE" mode of FPUs in MIPS32R5 processors and allows both FP32 and FP64A code to interlink and run in the same process without changing FPU modes. -mabicalls -mno-abicalls Generate (do not generate) code that is suitable for SVR4-style dynamic objects. -mabicalls is the default for SVR4-based systems. -mshared -mno-shared Generate (do not generate) code that is fully position- independent, and that can therefore be linked into shared libraries. This option only affects -mabicalls. All -mabicalls code has traditionally been position- independent, regardless of options like -fPIC and -fpic. However, as an extension, the GNU toolchain allows executables to use absolute accesses for locally-binding symbols. It can also use shorter GP initialization sequences and generate direct calls to locally-defined functions. This mode is selected by -mno-shared. -mno-shared depends on binutils 2.16 or higher and generates objects that can only be linked by the GNU linker. However, the option does not affect the ABI of the final executable; it only affects the ABI of relocatable objects. Using -mno-shared generally makes executables both smaller and quicker. 
-mshared is the default. -mplt -mno-plt Assume (do not assume) that the static and dynamic linkers support PLTs and copy relocations. This option only affects -mno-shared -mabicalls. For the n64 ABI, this option has no effect without -msym32. You can make -mplt the default by configuring GCC with --with-mips-plt. The default is -mno-plt otherwise. -mxgot -mno-xgot Lift (do not lift) the usual restrictions on the size of the global offset table. GCC normally uses a single instruction to load values from the GOT. While this is relatively efficient, it only works if the GOT is smaller than about 64k. Anything larger causes the linker to report an error such as: relocation truncated to fit: R_MIPS_GOT16 foobar If this happens, you should recompile your code with -mxgot. This works with very large GOTs, although the code is also less efficient, since it takes three instructions to fetch the value of a global symbol. Note that some linkers can create multiple GOTs. If you have such a linker, you should only need to use -mxgot when a single object file accesses more than 64k's worth of GOT entries. Very few do. These options have no effect unless GCC is generating position independent code. -mgp32 Assume that general-purpose registers are 32 bits wide. -mgp64 Assume that general-purpose registers are 64 bits wide. -mfp32 Assume that floating-point registers are 32 bits wide. -mfp64 Assume that floating-point registers are 64 bits wide. -mfpxx Do not assume the width of floating-point registers. -mhard-float Use floating-point coprocessor instructions. -msoft-float Do not use floating-point coprocessor instructions. Implement floating-point calculations using library calls instead. -mno-float Equivalent to -msoft-float, but additionally asserts that the program being compiled does not perform any floating-point operations. This option is presently supported only by some bare-metal MIPS configurations, where it may select a special set of libraries that lack all floating-point support (including, for example, the floating-point "printf" formats). If code compiled with -mno-float accidentally contains floating-point operations, it is likely to suffer a link-time or run-time failure. -msingle-float Assume that the floating-point coprocessor only supports single-precision operations. -mdouble-float Assume that the floating-point coprocessor supports double- precision operations. This is the default. -modd-spreg -mno-odd-spreg Enable the use of odd-numbered single-precision floating- point registers for the o32 ABI. This is the default for processors that are known to support these registers. When using the o32 FPXX ABI, -mno-odd-spreg is set by default. -mabs=2008 -mabs=legacy These options control the treatment of the special not-a- number (NaN) IEEE 754 floating-point data with the "abs.fmt" and "neg.fmt" machine instructions. By default or when -mabs=legacy is used the legacy treatment is selected. In this case these instructions are considered arithmetic and avoided where correct operation is required and the input operand might be a NaN. A longer sequence of instructions that manipulate the sign bit of floating-point datum manually is used instead unless the -ffinite-math-only option has also been specified. The -mabs=2008 option selects the IEEE 754-2008 treatment. In this case these instructions are considered non-arithmetic and therefore operating correctly in all cases, including in particular where the input operand is a NaN. 
These instructions are therefore always used for the respective operations. -mnan=2008 -mnan=legacy These options control the encoding of the special not-a- number (NaN) IEEE 754 floating-point data. The -mnan=legacy option selects the legacy encoding. In this case quiet NaNs (qNaNs) are denoted by the first bit of their trailing significand field being 0, whereas signaling NaNs (sNaNs) are denoted by the first bit of their trailing significand field being 1. The -mnan=2008 option selects the IEEE 754-2008 encoding. In this case qNaNs are denoted by the first bit of their trailing significand field being 1, whereas sNaNs are denoted by the first bit of their trailing significand field being 0. The default is -mnan=legacy unless GCC has been configured with --with-nan=2008. -mllsc -mno-llsc Use (do not use) ll, sc, and sync instructions to implement atomic memory built-in functions. When neither option is specified, GCC uses the instructions if the target architecture supports them. -mllsc is useful if the runtime environment can emulate the instructions and -mno-llsc can be useful when compiling for nonstandard ISAs. You can make either option the default by configuring GCC with --with-llsc and --without-llsc respectively. --with-llsc is the default for some configurations; see the installation documentation for details. -mdsp -mno-dsp Use (do not use) revision 1 of the MIPS DSP ASE. This option defines the preprocessor macro "__mips_dsp". It also defines "__mips_dsp_rev" to 1. -mdspr2 -mno-dspr2 Use (do not use) revision 2 of the MIPS DSP ASE. This option defines the preprocessor macros "__mips_dsp" and "__mips_dspr2". It also defines "__mips_dsp_rev" to 2. -msmartmips -mno-smartmips Use (do not use) the MIPS SmartMIPS ASE. -mpaired-single -mno-paired-single Use (do not use) paired-single floating-point instructions. This option requires hardware floating-point support to be enabled. -mdmx -mno-mdmx Use (do not use) MIPS Digital Media Extension instructions. This option can only be used when generating 64-bit code and requires hardware floating-point support to be enabled. -mips3d -mno-mips3d Use (do not use) the MIPS-3D ASE. The option -mips3d implies -mpaired-single. -mmicromips -mno-micromips Generate (do not generate) microMIPS code. MicroMIPS code generation can also be controlled on a per- function basis by means of "micromips" and "nomicromips" attributes. -mmt -mno-mt Use (do not use) MT Multithreading instructions. -mmcu -mno-mcu Use (do not use) the MIPS MCU ASE instructions. -meva -mno-eva Use (do not use) the MIPS Enhanced Virtual Addressing instructions. -mvirt -mno-virt Use (do not use) the MIPS Virtualization (VZ) instructions. -mxpa -mno-xpa Use (do not use) the MIPS eXtended Physical Address (XPA) instructions. -mcrc -mno-crc Use (do not use) the MIPS Cyclic Redundancy Check (CRC) instructions. -mginv -mno-ginv Use (do not use) the MIPS Global INValidate (GINV) instructions. -mloongson-mmi -mno-loongson-mmi Use (do not use) the MIPS Loongson MultiMedia extensions Instructions (MMI). -mloongson-ext -mno-loongson-ext Use (do not use) the MIPS Loongson EXTensions (EXT) instructions. -mloongson-ext2 -mno-loongson-ext2 Use (do not use) the MIPS Loongson EXTensions r2 (EXT2) instructions. -mlong64 Force "long" types to be 64 bits wide. See -mlong32 for an explanation of the default and the way that the pointer size is determined. -mlong32 Force "long", "int", and pointer types to be 32 bits wide. The default size of "int"s, "long"s and pointers depends on the ABI. 
All the supported ABIs use 32-bit "int"s. The n64 ABI uses 64-bit "long"s, as does the 64-bit EABI; the others use 32-bit "long"s. Pointers are the same size as "long"s, or the same size as integer registers, whichever is smaller. -msym32 -mno-sym32 Assume (do not assume) that all symbols have 32-bit values, regardless of the selected ABI. This option is useful in combination with -mabi=64 and -mno-abicalls because it allows GCC to generate shorter and faster references to symbolic addresses. -G num Put definitions of externally-visible data in a small data section if that data is no bigger than num bytes. GCC can then generate more efficient accesses to the data; see -mgpopt for details. The default -G option depends on the configuration. -mlocal-sdata -mno-local-sdata Extend (do not extend) the -G behavior to local data too, such as to static variables in C. -mlocal-sdata is the default for all configurations. If the linker complains that an application is using too much small data, you might want to try rebuilding the less performance-critical parts with -mno-local-sdata. You might also want to build large libraries with -mno-local-sdata, so that the libraries leave more room for the main program. -mextern-sdata -mno-extern-sdata Assume (do not assume) that externally-defined data is in a small data section if the size of that data is within the -G limit. -mextern-sdata is the default for all configurations. If you compile a module Mod with -mextern-sdata -G num -mgpopt, and Mod references a variable Var that is no bigger than num bytes, you must make sure that Var is placed in a small data section. If Var is defined by another module, you must either compile that module with a high-enough -G setting or attach a "section" attribute to Var's definition. If Var is common, you must link the application with a high-enough -G setting. The easiest way of satisfying these restrictions is to compile and link every module with the same -G option. However, you may wish to build a library that supports several different small data limits. You can do this by compiling the library with the highest supported -G setting and additionally using -mno-extern-sdata to stop the library from making assumptions about externally-defined data. -mgpopt -mno-gpopt Use (do not use) GP-relative accesses for symbols that are known to be in a small data section; see -G, -mlocal-sdata and -mextern-sdata. -mgpopt is the default for all configurations. -mno-gpopt is useful for cases where the $gp register might not hold the value of "_gp". For example, if the code is part of a library that might be used in a boot monitor, programs that call boot monitor routines pass an unknown value in $gp. (In such situations, the boot monitor itself is usually compiled with -G0.) -mno-gpopt implies -mno-local-sdata and -mno-extern-sdata. -membedded-data -mno-embedded-data Allocate variables to the read-only data section first if possible, then next in the small data section if possible, otherwise in data. This gives slightly slower code than the default, but reduces the amount of RAM required when executing, and thus may be preferred for some embedded systems. -muninit-const-in-rodata -mno-uninit-const-in-rodata Put uninitialized "const" variables in the read-only data section. This option is only meaningful in conjunction with -membedded-data. -mcode-readable=setting Specify whether GCC may generate code that reads from executable sections. 
There are three possible settings: -mcode-readable=yes Instructions may freely access executable sections. This is the default setting. -mcode-readable=pcrel MIPS16 PC-relative load instructions can access executable sections, but other instructions must not do so. This option is useful on 4KSc and 4KSd processors when the code TLBs have the Read Inhibit bit set. It is also useful on processors that can be configured to have a dual instruction/data SRAM interface and that, like the M4K, automatically redirect PC-relative loads to the instruction RAM. -mcode-readable=no Instructions must not access executable sections. This option can be useful on targets that are configured to have a dual instruction/data SRAM interface but that (unlike the M4K) do not automatically redirect PC- relative loads to the instruction RAM. -msplit-addresses -mno-split-addresses Enable (disable) use of the "%hi()" and "%lo()" assembler relocation operators. This option has been superseded by -mexplicit-relocs but is retained for backwards compatibility. -mexplicit-relocs -mno-explicit-relocs Use (do not use) assembler relocation operators when dealing with symbolic addresses. The alternative, selected by -mno-explicit-relocs, is to use assembler macros instead. -mexplicit-relocs is the default if GCC was configured to use an assembler that supports relocation operators. -mcheck-zero-division -mno-check-zero-division Trap (do not trap) on integer division by zero. The default is -mcheck-zero-division. -mdivide-traps -mdivide-breaks MIPS systems check for division by zero by generating either a conditional trap or a break instruction. Using traps results in smaller code, but is only supported on MIPS II and later. Also, some versions of the Linux kernel have a bug that prevents trap from generating the proper signal ("SIGFPE"). Use -mdivide-traps to allow conditional traps on architectures that support them and -mdivide-breaks to force the use of breaks. The default is usually -mdivide-traps, but this can be overridden at configure time using --with-divide=breaks. Divide-by-zero checks can be completely disabled using -mno-check-zero-division. -mload-store-pairs -mno-load-store-pairs Enable (disable) an optimization that pairs consecutive load or store instructions to enable load/store bonding. This option is enabled by default but only takes effect when the selected architecture is known to support bonding. -mmemcpy -mno-memcpy Force (do not force) the use of "memcpy" for non-trivial block moves. The default is -mno-memcpy, which allows GCC to inline most constant-sized copies. -mlong-calls -mno-long-calls Disable (do not disable) use of the "jal" instruction. Calling functions using "jal" is more efficient but requires the caller and callee to be in the same 256 megabyte segment. This option has no effect on abicalls code. The default is -mno-long-calls. -mmad -mno-mad Enable (disable) use of the "mad", "madu" and "mul" instructions, as provided by the R4650 ISA. -mimadd -mno-imadd Enable (disable) use of the "madd" and "msub" integer instructions. The default is -mimadd on architectures that support "madd" and "msub" except for the 74k architecture where it was found to generate slower code. -mfused-madd -mno-fused-madd Enable (disable) use of the floating-point multiply- accumulate instructions, when they are available. The default is -mfused-madd. 
On the R8000 CPU when multiply-accumulate instructions are used, the intermediate product is calculated to infinite precision and is not subject to the FCSR Flush to Zero bit. This may be undesirable in some circumstances. On other processors the result is numerically identical to the equivalent computation using separate multiply, add, subtract and negate instructions. -nocpp Tell the MIPS assembler to not run its preprocessor over user assembler files (with a .s suffix) when assembling them. -mfix-24k -mno-fix-24k Work around the 24K E48 (lost data on stores during refill) errata. The workarounds are implemented by the assembler rather than by GCC. -mfix-r4000 -mno-fix-r4000 Work around certain R4000 CPU errata: - A double-word or a variable shift may give an incorrect result if executed immediately after starting an integer division. - A double-word or a variable shift may give an incorrect result if executed while an integer multiplication is in progress. - An integer division may give an incorrect result if started in a delay slot of a taken branch or a jump. -mfix-r4400 -mno-fix-r4400 Work around certain R4400 CPU errata: - A double-word or a variable shift may give an incorrect result if executed immediately after starting an integer division. -mfix-r10000 -mno-fix-r10000 Work around certain R10000 errata: - "ll"/"sc" sequences may not behave atomically on revisions prior to 3.0. They may deadlock on revisions 2.6 and earlier. This option can only be used if the target architecture supports branch-likely instructions. -mfix-r10000 is the default when -march=r10000 is used; -mno-fix-r10000 is the default otherwise. -mfix-r5900 -mno-fix-r5900 Do not attempt to schedule the preceding instruction into the delay slot of a branch instruction placed at the end of a short loop of six instructions or fewer and always schedule a "nop" instruction there instead. The short loop bug under certain conditions causes loops to execute only once or twice, due to a hardware bug in the R5900 chip. The workaround is implemented by the assembler rather than by GCC. -mfix-rm7000 -mno-fix-rm7000 Work around the RM7000 "dmult"/"dmultu" errata. The workarounds are implemented by the assembler rather than by GCC. -mfix-vr4120 -mno-fix-vr4120 Work around certain VR4120 errata: - "dmultu" does not always produce the correct result. - "div" and "ddiv" do not always produce the correct result if one of the operands is negative. The workarounds for the division errata rely on special functions in libgcc.a. At present, these functions are only provided by the "mips64vr*-elf" configurations. Other VR4120 errata require a NOP to be inserted between certain pairs of instructions. These errata are handled by the assembler, not by GCC itself. -mfix-vr4130 Work around the VR4130 "mflo"/"mfhi" errata. The workarounds are implemented by the assembler rather than by GCC, although GCC avoids using "mflo" and "mfhi" if the VR4130 "macc", "macchi", "dmacc" and "dmacchi" instructions are available instead. -mfix-sb1 -mno-fix-sb1 Work around certain SB-1 CPU core errata. (This flag currently works around the SB-1 revision 2 "F1" and "F2" floating-point errata.) -mr10k-cache-barrier=setting Specify whether GCC should insert cache barriers to avoid the side effects of speculation on R10K processors. In common with many processors, the R10K tries to predict the outcome of a conditional branch and speculatively executes instructions from the "taken" branch. It later aborts these instructions if the predicted outcome is wrong. 
However, on the R10K, even aborted instructions can have side effects. This problem only affects kernel stores and, depending on the system, kernel loads. As an example, a speculatively-executed store may load the target memory into cache and mark the cache line as dirty, even if the store itself is later aborted. If a DMA operation writes to the same area of memory before the "dirty" line is flushed, the cached data overwrites the DMA-ed data. See the R10K processor manual for a full description, including other potential problems.
One workaround is to insert cache barrier instructions before every memory access that might be speculatively executed and that might have side effects even if aborted. -mr10k-cache-barrier=setting controls GCC's implementation of this workaround. It assumes that aborted accesses to any byte in the following regions do not have side effects:
    1. the memory occupied by the current function's stack frame;
    2. the memory occupied by an incoming stack argument;
    3. the memory occupied by an object with a link-time-constant address.
It is the kernel's responsibility to ensure that speculative accesses to these regions are indeed safe.
If the input program contains a function declaration such as:
    void foo (void);
then the implementation of "foo" must allow "j foo" and "jal foo" to be executed speculatively. GCC honors this restriction for functions it compiles itself. It expects non-GCC functions (such as hand-written assembly code) to do the same.
The option has three forms:
-mr10k-cache-barrier=load-store Insert a cache barrier before a load or store that might be speculatively executed and that might have side effects even if aborted.
-mr10k-cache-barrier=store Insert a cache barrier before a store that might be speculatively executed and that might have side effects even if aborted.
-mr10k-cache-barrier=none Disable the insertion of cache barriers. This is the default setting.
-mflush-func=func -mno-flush-func Specifies the function to call to flush the I and D caches, or to not call any such function. If called, the function must take the same arguments as the common "_flush_func", that is, the address of the memory range for which the cache is being flushed, the size of the memory range, and the number 3 (to flush both caches). The default depends on the target GCC was configured for, but commonly is either "_flush_func" or "__cpu_flush".
-mbranch-cost=num Set the cost of branches to roughly num "simple" instructions. This cost is only a heuristic and is not guaranteed to produce consistent results across releases. A zero cost redundantly selects the default, which is based on the -mtune setting.
-mbranch-likely -mno-branch-likely Enable or disable use of Branch Likely instructions, regardless of the default for the selected architecture. By default, Branch Likely instructions may be generated if they are supported by the selected architecture. An exception is for the MIPS32 and MIPS64 architectures and processors that implement those architectures; for those, Branch Likely instructions are not generated by default because the MIPS32 and MIPS64 architectures specifically deprecate their use.
-mcompact-branches=never -mcompact-branches=optimal -mcompact-branches=always These options control which form of branches will be generated. The default is -mcompact-branches=optimal. The -mcompact-branches=never option ensures that compact branch instructions will never be generated.
The -mcompact-branches=always option ensures that a compact branch instruction will be generated if available. If a compact branch instruction is not available, a delay slot form of the branch will be used instead. This option is supported from MIPS Release 6 onwards. The -mcompact-branches=optimal option will cause a delay slot branch to be used if one is available in the current ISA and the delay slot is successfully filled. If the delay slot is not filled, a compact branch will be chosen if one is available. -mfp-exceptions -mno-fp-exceptions Specifies whether FP exceptions are enabled. This affects how FP instructions are scheduled for some processors. The default is that FP exceptions are enabled. For instance, on the SB-1, if FP exceptions are disabled, and we are emitting 64-bit code, then we can use both FP pipes. Otherwise, we can only use one FP pipe. -mvr4130-align -mno-vr4130-align The VR4130 pipeline is two-way superscalar, but can only issue two instructions together if the first one is 8-byte aligned. When this option is enabled, GCC aligns pairs of instructions that it thinks should execute in parallel. This option only has an effect when optimizing for the VR4130. It normally makes code faster, but at the expense of making it bigger. It is enabled by default at optimization level -O3. -msynci -mno-synci Enable (disable) generation of "synci" instructions on architectures that support it. The "synci" instructions (if enabled) are generated when "__builtin___clear_cache" is compiled. This option defaults to -mno-synci, but the default can be overridden by configuring GCC with --with-synci. When compiling code for single processor systems, it is generally safe to use "synci". However, on many multi-core (SMP) systems, it does not invalidate the instruction caches on all cores and may lead to undefined behavior. -mrelax-pic-calls -mno-relax-pic-calls Try to turn PIC calls that are normally dispatched via register $25 into direct calls. This is only possible if the linker can resolve the destination at link time and if the destination is within range for a direct call. -mrelax-pic-calls is the default if GCC was configured to use an assembler and a linker that support the ".reloc" assembly directive and -mexplicit-relocs is in effect. With -mno-explicit-relocs, this optimization can be performed by the assembler and the linker alone without help from the compiler. -mmcount-ra-address -mno-mcount-ra-address Emit (do not emit) code that allows "_mcount" to modify the calling function's return address. When enabled, this option extends the usual "_mcount" interface with a new ra-address parameter, which has type "intptr_t *" and is passed in register $12. "_mcount" can then modify the return address by doing both of the following: * Returning the new address in register $31. * Storing the new address in "*ra-address", if ra-address is nonnull. The default is -mno-mcount-ra-address. -mframe-header-opt -mno-frame-header-opt Enable (disable) frame header optimization in the o32 ABI. When using the o32 ABI, calling functions will allocate 16 bytes on the stack for the called function to write out register arguments. When enabled, this optimization will suppress the allocation of the frame header if it can be determined that it is unused. This optimization is off by default at all optimization levels. -mlxc1-sxc1 -mno-lxc1-sxc1 When applicable, enable (disable) the generation of "lwxc1", "swxc1", "ldxc1", "sdxc1" instructions. Enabled by default. 
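Since -msynci above changes how "__builtin___clear_cache" is expanded, a short sketch of that builtin's typical use may help. It assumes "codebuf" already points to memory that is both writable and executable (obtained elsewhere, for example from the operating system), and the multi-core caveat given for "synci" above still applies.

    #include <string.h>

    typedef int (*entry_fn) (void);

    /* Copy freshly generated instructions into place, then make sure the
       instruction cache sees them before jumping to the new code.  Under
       -msynci this builtin can expand to inline "synci" instructions;
       otherwise GCC falls back to a cache-flush call.                     */
    static int
    run_generated (void *codebuf, const void *code, size_t len)
    {
        memcpy (codebuf, code, len);
        __builtin___clear_cache ((char *) codebuf, (char *) codebuf + len);
        /* Converting a data pointer to a function pointer relies on a
           common extension accepted by GCC.                               */
        return ((entry_fn) codebuf) ();
    }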
-mmadd4 -mno-madd4 When applicable, enable (disable) the generation of 4-operand "madd.s", "madd.d" and related instructions. Enabled by default.
MMIX Options
These options are defined for the MMIX:
-mlibfuncs -mno-libfuncs Specify that intrinsic library functions are being compiled, passing all values in registers, no matter the size.
-mepsilon -mno-epsilon Generate floating-point comparison instructions that compare with respect to the "rE" epsilon register.
-mabi=mmixware -mabi=gnu Generate code that passes function parameters and return values that (in the called function) are seen as registers $0 and up, as opposed to the GNU ABI which uses global registers $231 and up.
-mzero-extend -mno-zero-extend When reading data from memory in sizes shorter than 64 bits, use (do not use) zero-extending load instructions by default, rather than sign-extending ones.
-mknuthdiv -mno-knuthdiv Make the result of a division yielding a remainder have the same sign as the divisor. With the default, -mno-knuthdiv, the sign of the remainder follows the sign of the dividend. Both methods are arithmetically valid, the latter being almost exclusively used.
-mtoplevel-symbols -mno-toplevel-symbols Prepend (do not prepend) a : to all global symbols, so the assembly code can be used with the "PREFIX" assembly directive.
-melf Generate an executable in the ELF format, rather than the default mmo format used by the mmix simulator.
-mbranch-predict -mno-branch-predict Use (do not use) the probable-branch instructions, when static branch prediction indicates a probable branch.
-mbase-addresses -mno-base-addresses Generate (do not generate) code that uses base addresses. Using a base address automatically generates a request (handled by the assembler and the linker) for a constant to be set up in a global register. The register is used for one or more base address requests within the range 0 to 255 from the value held in the register. This generally leads to short and fast code, but the number of different data items that can be addressed is limited. This means that a program that uses lots of static data may require -mno-base-addresses.
-msingle-exit -mno-single-exit Force (do not force) generated code to have a single exit point in each function.
MN10300 Options
These -m options are defined for Matsushita MN10300 architectures:
-mmult-bug Generate code to avoid bugs in the multiply instructions for the MN10300 processors. This is the default.
-mno-mult-bug Do not generate code to avoid bugs in the multiply instructions for the MN10300 processors.
-mam33 Generate code using features specific to the AM33 processor.
-mno-am33 Do not generate code using features specific to the AM33 processor. This is the default.
-mam33-2 Generate code using features specific to the AM33/2.0 processor.
-mam34 Generate code using features specific to the AM34 processor.
-mtune=cpu-type Use the timing characteristics of the indicated CPU type when scheduling instructions. This does not change the targeted processor type. The CPU type must be one of mn10300, am33, am33-2 or am34.
-mreturn-pointer-on-d0 When generating a function that returns a pointer, return the pointer in both "a0" and "d0". Otherwise, the pointer is returned only in "a0", and attempts to call such functions without a prototype result in errors. Note that this option is on by default; use -mno-return-pointer-on-d0 to disable it.
-mno-crt0 Do not link in the C run-time initialization object file.
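The two remainder conventions contrasted under -mknuthdiv above are easiest to see with a worked value. The comment block below is purely illustrative arithmetic and is not a statement about how the option surfaces at the C level.

    /* Dividing -7 by 2:
     *
     *   -mno-knuthdiv (default):  -7 == (-3) * 2 + (-1)
     *       the remainder, -1, takes the sign of the dividend (-7)
     *
     *   -mknuthdiv:               -7 == (-4) * 2 +   1
     *       the remainder, +1, takes the sign of the divisor (2)
     *
     * Both identities hold; the conventions differ only in whether the
     * quotient is rounded toward zero or toward minus infinity.
     */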
-mrelax Indicate to the linker that it should perform a relaxation optimization pass to shorten branches, calls and absolute memory addresses. This option only has an effect when used on the command line for the final link step. This option makes symbolic debugging impossible.
-mliw Allow the compiler to generate Long Instruction Word instructions if the target is the AM33 or later. This is the default. This option defines the preprocessor macro "__LIW__".
-mno-liw Do not allow the compiler to generate Long Instruction Word instructions. This option defines the preprocessor macro "__NO_LIW__".
-msetlb Allow the compiler to generate the SETLB and Lcc instructions if the target is the AM33 or later. This is the default. This option defines the preprocessor macro "__SETLB__".
-mno-setlb Do not allow the compiler to generate SETLB or Lcc instructions. This option defines the preprocessor macro "__NO_SETLB__".
Moxie Options
-meb Generate big-endian code. This is the default for moxie-*-* configurations.
-mel Generate little-endian code.
-mmul.x Generate mul.x and umul.x instructions. This is the default for moxiebox-*-* configurations.
-mno-crt0 Do not link in the C run-time initialization object file.
MSP430 Options
These options are defined for the MSP430:
-masm-hex Force assembly output to always use hex constants. Normally such constants are signed decimals, but this option is available for testsuite and/or aesthetic purposes.
-mmcu= Select the MCU to target. This is used to create a C preprocessor symbol based upon the MCU name, converted to upper case and pre- and post-fixed with __. This in turn is used by the msp430.h header file to select an MCU-specific supplementary header file. The option also sets the ISA to use. If the MCU name is one that is known to only support the 430 ISA then that is selected, otherwise the 430X ISA is selected. A generic MCU name of msp430 can also be used to select the 430 ISA. Similarly the generic msp430x MCU name selects the 430X ISA. In addition an MCU-specific linker script is added to the linker command line. The script's name is the name of the MCU with .ld appended. Thus specifying -mmcu=xxx on the gcc command line defines the C preprocessor symbol "__XXX__" and causes the linker to search for a script called xxx.ld. This option is also passed on to the assembler.
-mwarn-mcu -mno-warn-mcu This option enables or disables warnings about conflicts between the MCU name specified by the -mmcu option and the ISA set by the -mcpu option and/or the hardware multiply support set by the -mhwmult option. It also toggles warnings about unrecognized MCU names. This option is on by default.
-mcpu= Specifies the ISA to use. Accepted values are msp430, msp430x and msp430xv2. This option is deprecated. The -mmcu= option should be used to select the ISA.
-msim Link to the simulator runtime libraries and linker script. Overrides any scripts that would be selected by the -mmcu= option.
-mlarge Use large-model addressing (20-bit pointers, 32-bit "size_t").
-msmall Use small-model addressing (16-bit pointers, 16-bit "size_t").
-mrelax This option is passed to the assembler and linker, and allows the linker to perform certain optimizations that cannot be done until the final link.
-mhwmult= Describes the type of hardware multiply supported by the target. Accepted values are none for no hardware multiply, 16bit for the original 16-bit-only multiply supported by early MCUs.
32bit for the 16/32-bit multiply supported by later MCUs and f5series for the 16/32-bit multiply supported by F5-series MCUs. A value of auto can also be given. This tells GCC to deduce the hardware multiply support based upon the MCU name provided by the -mmcu option. If no -mmcu option is specified or if the MCU name is not recognized then no hardware multiply support is assumed. "auto" is the default setting. Hardware multiplies are normally performed by calling a library routine. This saves space in the generated code. When compiling at -O3 or higher however the hardware multiplier is invoked inline. This makes for bigger, but faster code. The hardware multiply routines disable interrupts whilst running and restore the previous interrupt state when they finish. This makes them safe to use inside interrupt handlers as well as in normal code. -minrt Enable the use of a minimum runtime environment - no static initializers or constructors. This is intended for memory- constrained devices. The compiler includes special symbols in some objects that tell the linker and runtime which code fragments are required. -mcode-region= -mdata-region= These options tell the compiler where to place functions and data that do not have one of the "lower", "upper", "either" or "section" attributes. Possible values are "lower", "upper", "either" or "any". The first three behave like the corresponding attribute. The fourth possible value - "any" - is the default. It leaves placement entirely up to the linker script and how it assigns the standard sections (".text", ".data", etc) to the memory regions. -msilicon-errata= This option passes on a request to assembler to enable the fixes for the named silicon errata. -msilicon-errata-warn= This option passes on a request to the assembler to enable warning messages when a silicon errata might need to be applied. NDS32 Options These options are defined for NDS32 implementations: -mbig-endian Generate code in big-endian mode. -mlittle-endian Generate code in little-endian mode. -mreduced-regs Use reduced-set registers for register allocation. -mfull-regs Use full-set registers for register allocation. -mcmov Generate conditional move instructions. -mno-cmov Do not generate conditional move instructions. -mext-perf Generate performance extension instructions. -mno-ext-perf Do not generate performance extension instructions. -mext-perf2 Generate performance extension 2 instructions. -mno-ext-perf2 Do not generate performance extension 2 instructions. -mext-string Generate string extension instructions. -mno-ext-string Do not generate string extension instructions. -mv3push Generate v3 push25/pop25 instructions. -mno-v3push Do not generate v3 push25/pop25 instructions. -m16-bit Generate 16-bit instructions. -mno-16-bit Do not generate 16-bit instructions. -misr-vector-size=num Specify the size of each interrupt vector, which must be 4 or 16. -mcache-block-size=num Specify the size of each cache block, which must be a power of 2 between 4 and 512. -march=arch Specify the name of the target architecture. -mcmodel=code-model Set the code model to one of small All the data and read-only data segments must be within 512KB addressing space. The text segment must be within 16MB addressing space. medium The data segment must be within 512KB while the read-only data segment can be within 4GB addressing space. The text segment should be still within 16MB addressing space. large All the text and data segments can be within 4GB addressing space. 
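As a sketch of how the -mcode-region=/-mdata-region= defaults above interact with the per-object "lower", "upper" and "either" attributes named in that entry: the identifiers below are invented, and which regions are actually available depends on the MCU and its linker script.

    /* Objects without an attribute follow -mcode-region=/-mdata-region=;
       the attributes below pin individual objects to a region.           */

    const unsigned char font8x8[2048] __attribute__ ((upper)) = { 0 };

    int __attribute__ ((lower))
    fast_path (unsigned int x)          /* keep latency-critical code low */
    {
        return font8x8[x & 0x7ff];
    }

    int scratch __attribute__ ((either)) = 0;   /* linker picks the region */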
-mctor-dtor Enable constructor/destructor feature. -mrelax Guide linker to relax instructions. Nios II Options These are the options defined for the Altera Nios II processor. -G num Put global and static objects less than or equal to num bytes into the small data or BSS sections instead of the normal data or BSS sections. The default value of num is 8. -mgpopt=option -mgpopt -mno-gpopt Generate (do not generate) GP-relative accesses. The following option names are recognized: none Do not generate GP-relative accesses. local Generate GP-relative accesses for small data objects that are not external, weak, or uninitialized common symbols. Also use GP-relative addressing for objects that have been explicitly placed in a small data section via a "section" attribute. global As for local, but also generate GP-relative accesses for small data objects that are external, weak, or common. If you use this option, you must ensure that all parts of your program (including libraries) are compiled with the same -G setting. data Generate GP-relative accesses for all data objects in the program. If you use this option, the entire data and BSS segments of your program must fit in 64K of memory and you must use an appropriate linker script to allocate them within the addressable range of the global pointer. all Generate GP-relative addresses for function pointers as well as data pointers. If you use this option, the entire text, data, and BSS segments of your program must fit in 64K of memory and you must use an appropriate linker script to allocate them within the addressable range of the global pointer. -mgpopt is equivalent to -mgpopt=local, and -mno-gpopt is equivalent to -mgpopt=none. The default is -mgpopt except when -fpic or -fPIC is specified to generate position-independent code. Note that the Nios II ABI does not permit GP-relative accesses from shared libraries. You may need to specify -mno-gpopt explicitly when building programs that include large amounts of small data, including large GOT data sections. In this case, the 16-bit offset for GP-relative addressing may not be large enough to allow access to the entire small data section. -mgprel-sec=regexp This option specifies additional section names that can be accessed via GP-relative addressing. It is most useful in conjunction with "section" attributes on variable declarations and a custom linker script. The regexp is a POSIX Extended Regular Expression. This option does not affect the behavior of the -G option, and the specified sections are in addition to the standard ".sdata" and ".sbss" small-data sections that are recognized by -mgpopt. -mr0rel-sec=regexp This option specifies names of sections that can be accessed via a 16-bit offset from "r0"; that is, in the low 32K or high 32K of the 32-bit address space. It is most useful in conjunction with "section" attributes on variable declarations and a custom linker script. The regexp is a POSIX Extended Regular Expression. In contrast to the use of GP-relative addressing for small data, zero-based addressing is never generated by default and there are no conventional section names used in standard linker scripts for sections in the low or high areas of memory. -mel -meb Generate little-endian (default) or big-endian (experimental) code, respectively. -march=arch This specifies the name of the target Nios II architecture. GCC uses this name to determine what kind of instructions it can emit when generating assembly code. Permissible names are: r1, r2. 
The preprocessor macro "__nios2_arch__" is available to programs, with value 1 or 2, indicating the targeted ISA level. -mbypass-cache -mno-bypass-cache Force all load and store instructions to always bypass cache by using I/O variants of the instructions. The default is not to bypass the cache. -mno-cache-volatile -mcache-volatile Volatile memory access bypass the cache using the I/O variants of the load and store instructions. The default is not to bypass the cache. -mno-fast-sw-div -mfast-sw-div Do not use table-based fast divide for small numbers. The default is to use the fast divide at -O3 and above. -mno-hw-mul -mhw-mul -mno-hw-mulx -mhw-mulx -mno-hw-div -mhw-div Enable or disable emitting "mul", "mulx" and "div" family of instructions by the compiler. The default is to emit "mul" and not emit "div" and "mulx". -mbmx -mno-bmx -mcdx -mno-cdx Enable or disable generation of Nios II R2 BMX (bit manipulation) and CDX (code density) instructions. Enabling these instructions also requires -march=r2. Since these instructions are optional extensions to the R2 architecture, the default is not to emit them. -mcustom-insn=N -mno-custom-insn Each -mcustom-insn=N option enables use of a custom instruction with encoding N when generating code that uses insn. For example, -mcustom-fadds=253 generates custom instruction 253 for single-precision floating-point add operations instead of the default behavior of using a library call. The following values of insn are supported. Except as otherwise noted, floating-point operations are expected to be implemented with normal IEEE 754 semantics and correspond directly to the C operators or the equivalent GCC built-in functions. Single-precision floating point: fadds, fsubs, fdivs, fmuls Binary arithmetic operations. fnegs Unary negation. fabss Unary absolute value. fcmpeqs, fcmpges, fcmpgts, fcmples, fcmplts, fcmpnes Comparison operations. fmins, fmaxs Floating-point minimum and maximum. These instructions are only generated if -ffinite-math-only is specified. fsqrts Unary square root operation. fcoss, fsins, ftans, fatans, fexps, flogs Floating-point trigonometric and exponential functions. These instructions are only generated if -funsafe-math-optimizations is also specified. Double-precision floating point: faddd, fsubd, fdivd, fmuld Binary arithmetic operations. fnegd Unary negation. fabsd Unary absolute value. fcmpeqd, fcmpged, fcmpgtd, fcmpled, fcmpltd, fcmpned Comparison operations. fmind, fmaxd Double-precision minimum and maximum. These instructions are only generated if -ffinite-math-only is specified. fsqrtd Unary square root operation. fcosd, fsind, ftand, fatand, fexpd, flogd Double-precision trigonometric and exponential functions. These instructions are only generated if -funsafe-math-optimizations is also specified. Conversions: fextsd Conversion from single precision to double precision. ftruncds Conversion from double precision to single precision. fixsi, fixsu, fixdi, fixdu Conversion from floating point to signed or unsigned integer types, with truncation towards zero. round Conversion from single-precision floating point to signed integer, rounding to the nearest integer and ties away from zero. This corresponds to the "__builtin_lroundf" function when -fno-math-errno is used. floatis, floatus, floatid, floatud Conversion from signed or unsigned integer types to floating-point types. 
In addition, all of the following transfer instructions for internal registers X and Y must be provided to use any of the double-precision floating-point instructions. Custom instructions taking two double-precision source operands expect the first operand in the 64-bit register X. The other operand (or only operand of a unary operation) is given to the custom arithmetic instruction with the least significant half in source register src1 and the most significant half in src2. A custom instruction that returns a double-precision result returns the most significant 32 bits in the destination register and the other half in 32-bit register Y. GCC automatically generates the necessary code sequences to write register X and/or read register Y when double-precision floating-point instructions are used.

fwrx
Write src1 into the least significant half of X and src2 into the most significant half of X.

fwry
Write src1 into Y.

frdxhi, frdxlo
Read the most or least (respectively) significant half of X and store it in dest.

frdy
Read the value of Y and store it into dest.

Note that you can gain more local control over generation of Nios II custom instructions by using the "target("custom-insn=N")" and "target("no-custom-insn")" function attributes or pragmas.

-mcustom-fpu-cfg=name
This option enables a predefined, named set of custom instruction encodings (see -mcustom-insn above). Currently, the following sets are defined:

-mcustom-fpu-cfg=60-1 is equivalent to: -mcustom-fmuls=252 -mcustom-fadds=253 -mcustom-fsubs=254 -fsingle-precision-constant

-mcustom-fpu-cfg=60-2 is equivalent to: -mcustom-fmuls=252 -mcustom-fadds=253 -mcustom-fsubs=254 -mcustom-fdivs=255 -fsingle-precision-constant

-mcustom-fpu-cfg=72-3 is equivalent to: -mcustom-floatus=243 -mcustom-fixsi=244 -mcustom-floatis=245 -mcustom-fcmpgts=246 -mcustom-fcmples=249 -mcustom-fcmpeqs=250 -mcustom-fcmpnes=251 -mcustom-fmuls=252 -mcustom-fadds=253 -mcustom-fsubs=254 -mcustom-fdivs=255 -fsingle-precision-constant

Custom instruction assignments given by individual -mcustom-insn= options override those given by -mcustom-fpu-cfg=, regardless of the order of the options on the command line.

Note that you can gain more local control over selection of an FPU configuration by using the "target("custom-fpu-cfg=name")" function attribute or pragma.

These additional -m options are available for the Altera Nios II ELF (bare-metal) target:

-mhal
Link with HAL BSP. This suppresses linking with the GCC-provided C runtime startup and termination code, and is typically used in conjunction with -msys-crt0= to specify the location of the alternate startup code provided by the HAL BSP.

-msmallc
Link with a limited version of the C library, -lsmallc, rather than Newlib.

-msys-crt0=startfile
startfile is the file name of the startfile (crt0) to use when linking. This option is only useful in conjunction with -mhal.

-msys-lib=systemlib
systemlib is the library name of the library that provides low-level system calls required by the C library, e.g. "read" and "write". This option is typically used to link with a library provided by a HAL BSP.

Nvidia PTX Options

These options are defined for Nvidia PTX:

-m32
-m64
Generate code for 32-bit or 64-bit ABI.

-misa=ISA-string
Generate code for the specified PTX ISA (e.g. sm_35). ISA strings must be lower-case. Valid ISA strings include sm_30 and sm_35. The default ISA is sm_30.

-mmainkernel
Link in code for a __main kernel. This is for stand-alone execution instead of offloading.
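A minimal sketch of the stand-alone use case described for -mmainkernel; the nvptx-none-gcc driver name in the comment is an assumption about how the cross toolchain is installed, and the program itself is a placeholder:

    /* standalone.c - built for the PTX target itself rather than offloaded.
       A possible invocation (assumed toolchain prefix):
           nvptx-none-gcc -m64 -misa=sm_35 -mmainkernel -O2 standalone.c
       -mmainkernel links in the __main kernel so the result can be run
       directly, e.g. under the nvptx simulator runtime.  */
    int
    main (void)
    {
      volatile int x = 6 * 7;    /* trivial work so the kernel does something */
      return x == 42 ? 0 : 1;
    }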
-moptimize
Apply partitioned execution optimizations. This is the default when any level of optimization is selected.

-msoft-stack
Generate code that does not use ".local" memory directly for stack storage. Instead, a per-warp stack pointer is maintained explicitly. This enables variable-length stack allocation (with variable-length arrays or "alloca"), and when global memory is used for underlying storage, makes it possible to access automatic variables from other threads, or with atomic instructions. This code generation variant is used for OpenMP offloading, but the option is exposed on its own for the purpose of testing the compiler; to generate code suitable for linking into programs using OpenMP offloading, use option -mgomp.

-muniform-simt
Switch to a code generation variant that allows all threads in each warp to execute, while maintaining memory state and side effects as if only one thread in each warp were active outside of OpenMP SIMD regions. All atomic operations and calls to the runtime (malloc, free, vprintf) are conditionally executed (iff the current lane index equals the master lane index), and the register being assigned is copied via a shuffle instruction from the master lane. Outside of SIMD regions lane 0 is the master; inside, each thread sees itself as the master. The shared memory array "int __nvptx_uni[]" stores all-zeros or all-ones bitmasks for each warp, indicating the current mode (0 outside of SIMD regions). Each thread can bitwise-AND the bitmask at position "tid.y" with the current lane index to compute the master lane index.

-mgomp
Generate code for use in OpenMP offloading: enables the -msoft-stack and -muniform-simt options, and selects the corresponding multilib variant.

OpenRISC Options

These options are defined for OpenRISC:

-mboard=name
Configure a board-specific runtime. This will be passed to the linker for newlib board library linking. The default is "or1ksim".

-mnewlib
For compatibility, it's always newlib for elf now.

-mhard-div
Generate code for hardware which supports divide instructions. This is the default.

-mhard-mul
Generate code for hardware which supports multiply instructions. This is the default.

-mcmov
Generate code for hardware which supports the conditional move ("l.cmov") instruction.

-mror
Generate code for hardware which supports rotate right instructions.

-msext
Generate code for hardware which supports sign-extension instructions.

-msfimm
Generate code for hardware which supports set flag immediate ("l.sf*i") instructions.

-mshftimm
Generate code for hardware which supports shift immediate related instructions (i.e. "l.srai", "l.srli", "l.slli", "l.rori"). Note, to enable generation of the "l.rori" instruction the -mror flag must also be specified.

-msoft-div
Generate code for hardware which requires divide instruction emulation.

-msoft-mul
Generate code for hardware which requires multiply instruction emulation.

PDP-11 Options

These options are defined for the PDP-11:

-mfpu
Use hardware FPP floating point. This is the default. (FIS floating point on the PDP-11/40 is not supported.) Implies -m45.

-msoft-float
Do not use hardware floating point.

-mac0
Return floating-point results in ac0 (fr0 in Unix assembler syntax).

-mno-ac0
Return floating-point results in memory. This is the default.

-m40
Generate code for a PDP-11/40. Implies -msoft-float -mno-split.

-m45
Generate code for a PDP-11/45. This is the default.

-m10
Generate code for a PDP-11/10. Implies -msoft-float -mno-split.

-mint16
-mno-int32
Use 16-bit "int". This is the default.
-mint32
-mno-int16
Use 32-bit "int".

-msplit
Target has split instruction and data space. Implies -m45.

-munix-asm
Use Unix assembler syntax.

-mdec-asm
Use DEC assembler syntax.

-mgnu-asm
Use GNU assembler syntax. This is the default.

-mlra
Use the new LRA register allocator. By default, the old "reload" allocator is used.

picoChip Options

These -m options are defined for picoChip implementations:

-mae=ae_type
Set the instruction set, register set, and instruction scheduling parameters for array element type ae_type. Supported values for ae_type are ANY, MUL, and MAC.

-mae=ANY selects a completely generic AE type. Code generated with this option runs on any of the other AE types. The code is not as efficient as it would be if compiled for a specific AE type, and some types of operation (e.g., multiplication) do not work properly on all types of AE.

-mae=MUL selects a MUL AE type. This is the most useful AE type for compiled code, and is the default.

-mae=MAC selects a DSP-style MAC AE. Code compiled with this option may suffer from poor performance of byte (char) manipulation, since the DSP AE does not provide hardware support for byte load/stores.

-msymbol-as-address
Enable the compiler to directly use a symbol name as an address in a load/store instruction, without first loading it into a register. Typically, the use of this option generates larger programs, which run faster than when the option isn't used. However, the results vary from program to program, so it is left as a user option, rather than being permanently enabled.

-mno-inefficient-warnings
Disables warnings about the generation of inefficient code. These warnings can be generated, for example, when compiling code that performs byte-level memory operations on the MAC AE type. The MAC AE has no hardware support for byte-level memory operations, so all byte load/stores must be synthesized from word load/store operations. This is inefficient and a warning is generated to indicate that you should rewrite the code to avoid byte operations, or to target an AE type that has the necessary hardware support. This option disables these warnings.

PowerPC Options

These are listed under IBM RS/6000 and PowerPC Options below.

RISC-V Options

These command-line options are defined for RISC-V targets:

-mbranch-cost=n
Set the cost of branches to roughly n instructions.

-mplt
-mno-plt
When generating PIC code, do or don't allow the use of PLTs. Ignored for non-PIC. The default is -mplt.

-mabi=ABI-string
Specify integer and floating-point calling convention. ABI-string contains two parts: the size of integer types and the registers used for floating-point types. For example -march=rv64ifd -mabi=lp64d means that long and pointers are 64-bit (implicitly defining int to be 32-bit), and that floating-point values up to 64 bits wide are passed in F registers. Contrast this with -march=rv64ifd -mabi=lp64f, which still allows the compiler to generate code that uses the F and D extensions but only allows floating-point values up to 32 bits long to be passed in registers; or -march=rv64ifd -mabi=lp64, in which no floating-point arguments will be passed in registers.

The default for this argument is system dependent; users who want a specific calling convention should specify one explicitly. The valid calling conventions are: ilp32, ilp32f, ilp32d, lp64, lp64f, and lp64d. Some calling conventions are impossible to implement on some ISAs: for example, -march=rv32if -mabi=ilp32d is invalid because the ABI requires 64-bit values to be passed in F registers, but F registers are only 32 bits wide.
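A small sketch of how the -march/-mabi combinations above affect argument passing; the comments simply restate the behavior described for -mabi and are not a complete ABI statement:

    /* Compiled with -march=rv64ifd.  */
    double
    scale (double x, double y)
    {
      return x * y;
    }
    /* With -mabi=lp64d, x and y arrive in floating-point registers; with
       -mabi=lp64f only values up to 32 bits wide may use them, so these
       doubles are passed according to the integer calling convention; with
       -mabi=lp64 no floating-point arguments are passed in registers.  */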
There is also the ilp32e ABI that can only be used with the rv32e architecture. This ABI is not well specified at present, and is subject to change.

-mfdiv
-mno-fdiv
Do or don't use hardware floating-point divide and square root instructions. This requires the F or D extensions for floating-point registers. The default is to use them if the specified architecture has these instructions.

-mdiv
-mno-div
Do or don't use hardware instructions for integer division. This requires the M extension. The default is to use them if the specified architecture has these instructions.

-march=ISA-string
Generate code for the given RISC-V ISA (e.g. rv64im). ISA strings must be lower-case. Examples include rv64i, rv32g, rv32e, and rv32imaf.

-mtune=processor-string
Optimize the output for the given processor, specified by microarchitecture name. Permissible values for this option are: rocket, sifive-3-series, sifive-5-series, sifive-7-series, and size.

When -mtune= is not specified, the default is rocket.

The size choice is not intended for use by end-users. This is used when -Os is specified. It overrides the instruction cost info provided by -mtune=, but does not override the pipeline info. This helps reduce code size while still giving good performance.

-mpreferred-stack-boundary=num
Attempt to keep the stack boundary aligned to a 2 raised to num byte boundary. If -mpreferred-stack-boundary is not specified, the default is 4 (16 bytes or 128 bits).

Warning: If you use this switch, then you must build all modules with the same value, including any libraries. This includes the system libraries and startup modules.

-msmall-data-limit=n
Put global and static data smaller than n bytes into a special section (on some targets).

-msave-restore
-mno-save-restore
Do or don't use smaller but slower prologue and epilogue code that uses library function calls. The default is to use fast inline prologues and epilogues.

-mstrict-align
-mno-strict-align
Do not or do generate unaligned memory accesses. The default is set depending on whether the processor we are optimizing for supports fast unaligned access or not.

-mcmodel=medlow
Generate code for the medium-low code model. The program and its statically defined symbols must lie within a single 2 GiB address range and must lie between absolute addresses -2 GiB and +2 GiB. Programs can be statically or dynamically linked. This is the default code model.

-mcmodel=medany
Generate code for the medium-any code model. The program and its statically defined symbols must be within any single 2 GiB address range. Programs can be statically or dynamically linked.

-mexplicit-relocs
-mno-explicit-relocs
Use or do not use assembler relocation operators when dealing with symbolic addresses. The alternative is to use assembler macros instead, which may limit optimization.

-mrelax
-mno-relax
Take advantage of linker relaxations to reduce the number of instructions required to materialize symbol addresses. The default is to take advantage of linker relaxations.

-memit-attribute
-mno-emit-attribute
Emit (do not emit) a RISC-V attribute to record extra information into ELF objects. This feature requires at least binutils 2.32.

RL78 Options

-msim
Links in additional target libraries to support operation within a simulator.

-mmul=none
-mmul=g10
-mmul=g13
-mmul=g14
-mmul=rl78
Specifies the type of hardware multiplication and division support to be used. The simplest is "none", which uses software for both multiplication and division. This is the default.
The "g13" value is for the hardware multiply/divide peripheral found on the RL78/G13 (S2 core) targets. The "g14" value selects the use of the multiplication and division instructions supported by the RL78/G14 (S3 core) parts. The value "rl78" is an alias for "g14" and the value "mg10" is an alias for "none". In addition a C preprocessor macro is defined, based upon the setting of this option. Possible values are: "__RL78_MUL_NONE__", "__RL78_MUL_G13__" or "__RL78_MUL_G14__". -mcpu=g10 -mcpu=g13 -mcpu=g14 -mcpu=rl78 Specifies the RL78 core to target. The default is the G14 core, also known as an S3 core or just RL78. The G13 or S2 core does not have multiply or divide instructions, instead it uses a hardware peripheral for these operations. The G10 or S1 core does not have register banks, so it uses a different calling convention. If this option is set it also selects the type of hardware multiply support to use, unless this is overridden by an explicit -mmul=none option on the command line. Thus specifying -mcpu=g13 enables the use of the G13 hardware multiply peripheral and specifying -mcpu=g10 disables the use of hardware multiplications altogether. Note, although the RL78/G14 core is the default target, specifying -mcpu=g14 or -mcpu=rl78 on the command line does change the behavior of the toolchain since it also enables G14 hardware multiply support. If these options are not specified on the command line then software multiplication routines will be used even though the code targets the RL78 core. This is for backwards compatibility with older toolchains which did not have hardware multiply and divide support. In addition a C preprocessor macro is defined, based upon the setting of this option. Possible values are: "__RL78_G10__", "__RL78_G13__" or "__RL78_G14__". -mg10 -mg13 -mg14 -mrl78 These are aliases for the corresponding -mcpu= option. They are provided for backwards compatibility. -mallregs Allow the compiler to use all of the available registers. By default registers "r24..r31" are reserved for use in interrupt handlers. With this option enabled these registers can be used in ordinary functions as well. -m64bit-doubles -m32bit-doubles Make the "double" data type be 64 bits (-m64bit-doubles) or 32 bits (-m32bit-doubles) in size. The default is -m32bit-doubles. -msave-mduc-in-interrupts -mno-save-mduc-in-interrupts Specifies that interrupt handler functions should preserve the MDUC registers. This is only necessary if normal code might use the MDUC registers, for example because it performs multiplication and division operations. The default is to ignore the MDUC registers as this makes the interrupt handlers faster. The target option -mg13 needs to be passed for this to work as this feature is only available on the G13 target (S2 core). The MDUC registers will only be saved if the interrupt handler performs a multiplication or division operation or it calls another function. IBM RS/6000 and PowerPC Options These -m options are defined for the IBM RS/6000 and PowerPC: -mpowerpc-gpopt -mno-powerpc-gpopt -mpowerpc-gfxopt -mno-powerpc-gfxopt -mpowerpc64 -mno-powerpc64 -mmfcrf -mno-mfcrf -mpopcntb -mno-popcntb -mpopcntd -mno-popcntd -mfprnd -mno-fprnd -mcmpb -mno-cmpb -mmfpgpr -mno-mfpgpr -mhard-dfp -mno-hard-dfp You use these options to specify which instructions are available on the processor you are using. The default value of these options is determined when configuring GCC. Specifying the -mcpu=cpu_type overrides the specification of these options. 
We recommend you use the -mcpu=cpu_type option rather than the options listed above. Specifying -mpowerpc-gpopt allows GCC to use the optional PowerPC architecture instructions in the General Purpose group, including floating-point square root. Specifying -mpowerpc-gfxopt allows GCC to use the optional PowerPC architecture instructions in the Graphics group, including floating-point select. The -mmfcrf option allows GCC to generate the move from condition register field instruction implemented on the POWER4 processor and other processors that support the PowerPC V2.01 architecture. The -mpopcntb option allows GCC to generate the popcount and double-precision FP reciprocal estimate instruction implemented on the POWER5 processor and other processors that support the PowerPC V2.02 architecture. The -mpopcntd option allows GCC to generate the popcount instruction implemented on the POWER7 processor and other processors that support the PowerPC V2.06 architecture. The -mfprnd option allows GCC to generate the FP round to integer instructions implemented on the POWER5+ processor and other processors that support the PowerPC V2.03 architecture. The -mcmpb option allows GCC to generate the compare bytes instruction implemented on the POWER6 processor and other processors that support the PowerPC V2.05 architecture. The -mmfpgpr option allows GCC to generate the FP move to/from general-purpose register instructions implemented on the POWER6X processor and other processors that support the extended PowerPC V2.05 architecture. The -mhard-dfp option allows GCC to generate the decimal floating-point instructions implemented on some POWER processors. The -mpowerpc64 option allows GCC to generate the additional 64-bit instructions that are found in the full PowerPC64 architecture and to treat GPRs as 64-bit, doubleword quantities. GCC defaults to -mno-powerpc64. -mcpu=cpu_type Set architecture type, register usage, and instruction scheduling parameters for machine type cpu_type. Supported values for cpu_type are 401, 403, 405, 405fp, 440, 440fp, 464, 464fp, 476, 476fp, 505, 601, 602, 603, 603e, 604, 604e, 620, 630, 740, 7400, 7450, 750, 801, 821, 823, 860, 970, 8540, a2, e300c2, e300c3, e500mc, e500mc64, e5500, e6500, ec603e, G3, G4, G5, titan, power3, power4, power5, power5+, power6, power6x, power7, power8, power9, powerpc, powerpc64, powerpc64le, rs64, and native. -mcpu=powerpc, -mcpu=powerpc64, and -mcpu=powerpc64le specify pure 32-bit PowerPC (either endian), 64-bit big endian PowerPC and 64-bit little endian PowerPC architecture machine types, with an appropriate, generic processor model assumed for scheduling purposes. Specifying native as cpu type detects and selects the architecture option that corresponds to the host processor of the system performing the compilation. -mcpu=native has no effect if GCC does not recognize the processor. The other options specify a specific processor. Code generated under those options runs best on that processor, and may not run at all on others. 
The -mcpu options automatically enable or disable the following options: -maltivec -mfprnd -mhard-float -mmfcrf -mmultiple -mpopcntb -mpopcntd -mpowerpc64 -mpowerpc-gpopt -mpowerpc-gfxopt -mmulhw -mdlmzb -mmfpgpr -mvsx -mcrypto -mhtm -mpower8-fusion -mpower8-vector -mquad-memory -mquad-memory-atomic -mfloat128 -mfloat128-hardware The particular options set for any particular CPU varies between compiler versions, depending on what setting seems to produce optimal code for that CPU; it doesn't necessarily reflect the actual hardware's capabilities. If you wish to set an individual option to a particular value, you may specify it after the -mcpu option, like -mcpu=970 -mno-altivec. On AIX, the -maltivec and -mpowerpc64 options are not enabled or disabled by the -mcpu option at present because AIX does not have full support for these options. You may still enable or disable them individually if you're sure it'll work in your environment. -mtune=cpu_type Set the instruction scheduling parameters for machine type cpu_type, but do not set the architecture type or register usage, as -mcpu=cpu_type does. The same values for cpu_type are used for -mtune as for -mcpu. If both are specified, the code generated uses the architecture and registers set by -mcpu, but the scheduling parameters set by -mtune. -mcmodel=small Generate PowerPC64 code for the small model: The TOC is limited to 64k. -mcmodel=medium Generate PowerPC64 code for the medium model: The TOC and other static data may be up to a total of 4G in size. This is the default for 64-bit Linux. -mcmodel=large Generate PowerPC64 code for the large model: The TOC may be up to 4G in size. Other data and code is only limited by the 64-bit address space. -maltivec -mno-altivec Generate code that uses (does not use) AltiVec instructions, and also enable the use of built-in functions that allow more direct access to the AltiVec instruction set. You may also need to set -mabi=altivec to adjust the current ABI with AltiVec ABI enhancements. When -maltivec is used, the element order for AltiVec intrinsics such as "vec_splat", "vec_extract", and "vec_insert" match array element order corresponding to the endianness of the target. That is, element zero identifies the leftmost element in a vector register when targeting a big-endian platform, and identifies the rightmost element in a vector register when targeting a little-endian platform. -mvrsave -mno-vrsave Generate VRSAVE instructions when generating AltiVec code. -msecure-plt Generate code that allows ld and ld.so to build executables and shared libraries with non-executable ".plt" and ".got" sections. This is a PowerPC 32-bit SYSV ABI option. -mbss-plt Generate code that uses a BSS ".plt" section that ld.so fills in, and requires ".plt" and ".got" sections that are both writable and executable. This is a PowerPC 32-bit SYSV ABI option. -misel -mno-isel This switch enables or disables the generation of ISEL instructions. -mvsx -mno-vsx Generate code that uses (does not use) vector/scalar (VSX) instructions, and also enable the use of built-in functions that allow more direct access to the VSX instruction set. -mcrypto -mno-crypto Enable the use (disable) of the built-in functions that allow direct access to the cryptographic instructions that were added in version 2.07 of the PowerPC ISA. -mhtm -mno-htm Enable (disable) the use of the built-in functions that allow direct access to the Hardware Transactional Memory (HTM) instructions that were added in version 2.07 of the PowerPC ISA. 
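The HTM built-in functions enabled by -mhtm can be used along the following lines; this is only a sketch, and the fallback policy (retry, take a lock, and so on) is left to the application:

    extern long counter;

    int
    try_bump (void)
    {
      if (__builtin_tbegin (0))     /* nonzero when the transaction starts */
        {
          counter++;
          __builtin_tend (0);       /* commit */
          return 1;
        }
      return 0;                     /* failed or aborted: caller falls back,
                                       e.g. to a lock-protected update */
    }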
-mpower8-fusion
-mno-power8-fusion
Generate code that keeps (does not keep) some integer operations adjacent so that the instructions can be fused together on power8 and later processors.

-mpower8-vector
-mno-power8-vector
Generate code that uses (does not use) the vector and scalar instructions that were added in version 2.07 of the PowerPC ISA. Also enable the use of built-in functions that allow more direct access to the vector instructions.

-mquad-memory
-mno-quad-memory
Generate code that uses (does not use) the non-atomic quad word memory instructions. The -mquad-memory option requires use of 64-bit mode.

-mquad-memory-atomic
-mno-quad-memory-atomic
Generate code that uses (does not use) the atomic quad word memory instructions. The -mquad-memory-atomic option requires use of 64-bit mode.

-mfloat128
-mno-float128
Enable/disable the __float128 keyword for IEEE 128-bit floating point and use either software emulation for IEEE 128-bit floating point or hardware instructions.

The VSX instruction set (-mvsx, -mcpu=power7, -mcpu=power8), or -mcpu=power9 must be enabled to use the IEEE 128-bit floating point support. The IEEE 128-bit floating point support only works on PowerPC Linux systems. The default for -mfloat128 is enabled on PowerPC Linux systems using the VSX instruction set, and disabled on other systems.

If you use the ISA 3.0 instruction set (-mpower9-vector or -mcpu=power9) on a 64-bit system, the IEEE 128-bit floating point support will also enable the generation of ISA 3.0 IEEE 128-bit floating point instructions. Otherwise, if you do not specify to generate ISA 3.0 instructions or you are targeting a 32-bit big endian system, IEEE 128-bit floating point will be done with software emulation.

-mfloat128-hardware
-mno-float128-hardware
Enable/disable using ISA 3.0 hardware instructions to support the __float128 data type. The default for -mfloat128-hardware is enabled on PowerPC Linux systems using the ISA 3.0 instruction set, and disabled on other systems.

-m32
-m64
Generate code for 32-bit or 64-bit environments of Darwin and SVR4 targets (including GNU/Linux). The 32-bit environment sets int, long and pointer to 32 bits and generates code that runs on any PowerPC variant. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits, and generates code for PowerPC64, as for -mpowerpc64.

-mfull-toc
-mno-fp-in-toc
-mno-sum-in-toc
-mminimal-toc
Modify generation of the TOC (Table Of Contents), which is created for every executable file. The -mfull-toc option is selected by default. In that case, GCC allocates at least one TOC entry for each unique non-automatic variable reference in your program. GCC also places floating-point constants in the TOC. However, only 16,384 entries are available in the TOC.

If you receive a linker error message saying that you have overflowed the available TOC space, you can reduce the amount of TOC space used with the -mno-fp-in-toc and -mno-sum-in-toc options. -mno-fp-in-toc prevents GCC from putting floating-point constants in the TOC and -mno-sum-in-toc forces GCC to generate code to calculate the sum of an address and a constant at run time instead of putting that sum into the TOC. You may specify one or both of these options. Each causes GCC to produce very slightly slower and larger code at the expense of conserving TOC space.

If you still run out of space in the TOC even when you specify both of these options, specify -mminimal-toc instead. This option causes GCC to make only one TOC entry for every file.
When you specify this option, GCC produces code that is slower and larger but which uses extremely little TOC space. You may wish to use this option only on files that contain less frequently-executed code. -maix64 -maix32 Enable 64-bit AIX ABI and calling convention: 64-bit pointers, 64-bit "long" type, and the infrastructure needed to support them. Specifying -maix64 implies -mpowerpc64, while -maix32 disables the 64-bit ABI and implies -mno-powerpc64. GCC defaults to -maix32. -mxl-compat -mno-xl-compat Produce code that conforms more closely to IBM XL compiler semantics when using AIX-compatible ABI. Pass floating-point arguments to prototyped functions beyond the register save area (RSA) on the stack in addition to argument FPRs. Do not assume that most significant double in 128-bit long double value is properly rounded when comparing values and converting to double. Use XL symbol names for long double support routines. The AIX calling convention was extended but not initially documented to handle an obscure K&R C case of calling a function that takes the address of its arguments with fewer arguments than declared. IBM XL compilers access floating- point arguments that do not fit in the RSA from the stack when a subroutine is compiled without optimization. Because always storing floating-point arguments on the stack is inefficient and rarely needed, this option is not enabled by default and only is necessary when calling subroutines compiled by IBM XL compilers without optimization. -mpe Support IBM RS/6000 SP Parallel Environment (PE). Link an application written to use message passing with special startup code to enable the application to run. The system must have PE installed in the standard location (/usr/lpp/ppe.poe/), or the specs file must be overridden with the -specs= option to specify the appropriate directory location. The Parallel Environment does not support threads, so the -mpe option and the -pthread option are incompatible. -malign-natural -malign-power On AIX, 32-bit Darwin, and 64-bit PowerPC GNU/Linux, the option -malign-natural overrides the ABI-defined alignment of larger types, such as floating-point doubles, on their natural size-based boundary. The option -malign-power instructs GCC to follow the ABI-specified alignment rules. GCC defaults to the standard alignment defined in the ABI. On 64-bit Darwin, natural alignment is the default, and -malign-power is not supported. -msoft-float -mhard-float Generate code that does not use (uses) the floating-point register set. Software floating-point emulation is provided if you use the -msoft-float option, and pass the option to GCC when linking. -mmultiple -mno-multiple Generate code that uses (does not use) the load multiple word instructions and the store multiple word instructions. These instructions are generated by default on POWER systems, and not generated on PowerPC systems. Do not use -mmultiple on little-endian PowerPC systems, since those instructions do not work when the processor is in little-endian mode. The exceptions are PPC740 and PPC750 which permit these instructions in little-endian mode. -mupdate -mno-update Generate code that uses (does not use) the load or store instructions that update the base register to the address of the calculated memory location. These instructions are generated by default. 
If you use -mno-update, there is a small window between the time that the stack pointer is updated and the address of the previous frame is stored, which means code that walks the stack frame across interrupts or signals may get corrupted data. -mavoid-indexed-addresses -mno-avoid-indexed-addresses Generate code that tries to avoid (not avoid) the use of indexed load or store instructions. These instructions can incur a performance penalty on Power6 processors in certain situations, such as when stepping through large arrays that cross a 16M boundary. This option is enabled by default when targeting Power6 and disabled otherwise. -mfused-madd -mno-fused-madd Generate code that uses (does not use) the floating-point multiply and accumulate instructions. These instructions are generated by default if hardware floating point is used. The machine-dependent -mfused-madd option is now mapped to the machine-independent -ffp-contract=fast option, and -mno-fused-madd is mapped to -ffp-contract=off. -mmulhw -mno-mulhw Generate code that uses (does not use) the half-word multiply and multiply-accumulate instructions on the IBM 405, 440, 464 and 476 processors. These instructions are generated by default when targeting those processors. -mdlmzb -mno-dlmzb Generate code that uses (does not use) the string-search dlmzb instruction on the IBM 405, 440, 464 and 476 processors. This instruction is generated by default when targeting those processors. -mno-bit-align -mbit-align On System V.4 and embedded PowerPC systems do not (do) force structures and unions that contain bit-fields to be aligned to the base type of the bit-field. For example, by default a structure containing nothing but 8 "unsigned" bit-fields of length 1 is aligned to a 4-byte boundary and has a size of 4 bytes. By using -mno-bit-align, the structure is aligned to a 1-byte boundary and is 1 byte in size. -mno-strict-align -mstrict-align On System V.4 and embedded PowerPC systems do not (do) assume that unaligned memory references are handled by the system. -mrelocatable -mno-relocatable Generate code that allows (does not allow) a static executable to be relocated to a different address at run time. A simple embedded PowerPC system loader should relocate the entire contents of ".got2" and 4-byte locations listed in the ".fixup" section, a table of 32-bit addresses generated by this option. For this to work, all objects linked together must be compiled with -mrelocatable or -mrelocatable-lib. -mrelocatable code aligns the stack to an 8-byte boundary. -mrelocatable-lib -mno-relocatable-lib Like -mrelocatable, -mrelocatable-lib generates a ".fixup" section to allow static executables to be relocated at run time, but -mrelocatable-lib does not use the smaller stack alignment of -mrelocatable. Objects compiled with -mrelocatable-lib may be linked with objects compiled with any combination of the -mrelocatable options. -mno-toc -mtoc On System V.4 and embedded PowerPC systems do not (do) assume that register 2 contains a pointer to a global area pointing to the addresses used in the program. -mlittle -mlittle-endian On System V.4 and embedded PowerPC systems compile code for the processor in little-endian mode. The -mlittle-endian option is the same as -mlittle. -mbig -mbig-endian On System V.4 and embedded PowerPC systems compile code for the processor in big-endian mode. The -mbig-endian option is the same as -mbig. 
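The bit-field example given for -mno-bit-align above corresponds to a structure like the following; the sizes in the trailing comment restate the figures from that description:

    struct flags            /* nothing but 8 "unsigned" bit-fields of length 1 */
    {
      unsigned a : 1;
      unsigned b : 1;
      unsigned c : 1;
      unsigned d : 1;
      unsigned e : 1;
      unsigned f : 1;
      unsigned g : 1;
      unsigned h : 1;
    };
    /* sizeof (struct flags) is 4 with the default -mbit-align,
       and 1 when compiled with -mno-bit-align.  */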
-mdynamic-no-pic On Darwin and Mac OS X systems, compile code so that it is not relocatable, but that its external references are relocatable. The resulting code is suitable for applications, but not shared libraries. -msingle-pic-base Treat the register used for PIC addressing as read-only, rather than loading it in the prologue for each function. The runtime system is responsible for initializing this register with an appropriate value before execution begins. -mprioritize-restricted-insns=priority This option controls the priority that is assigned to dispatch-slot restricted instructions during the second scheduling pass. The argument priority takes the value 0, 1, or 2 to assign no, highest, or second-highest (respectively) priority to dispatch-slot restricted instructions. -msched-costly-dep=dependence_type This option controls which dependences are considered costly by the target during instruction scheduling. The argument dependence_type takes one of the following values: no No dependence is costly. all All dependences are costly. true_store_to_load A true dependence from store to load is costly. store_to_load Any dependence from store to load is costly. number Any dependence for which the latency is greater than or equal to number is costly. -minsert-sched-nops=scheme This option controls which NOP insertion scheme is used during the second scheduling pass. The argument scheme takes one of the following values: no Don't insert NOPs. pad Pad with NOPs any dispatch group that has vacant issue slots, according to the scheduler's grouping. regroup_exact Insert NOPs to force costly dependent insns into separate groups. Insert exactly as many NOPs as needed to force an insn to a new group, according to the estimated processor grouping. number Insert NOPs to force costly dependent insns into separate groups. Insert number NOPs to force an insn to a new group. -mcall-sysv On System V.4 and embedded PowerPC systems compile code using calling conventions that adhere to the March 1995 draft of the System V Application Binary Interface, PowerPC processor supplement. This is the default unless you configured GCC using powerpc-*-eabiaix. -mcall-sysv-eabi -mcall-eabi Specify both -mcall-sysv and -meabi options. -mcall-sysv-noeabi Specify both -mcall-sysv and -mno-eabi options. -mcall-aixdesc On System V.4 and embedded PowerPC systems compile code for the AIX operating system. -mcall-linux On System V.4 and embedded PowerPC systems compile code for the Linux-based GNU system. -mcall-freebsd On System V.4 and embedded PowerPC systems compile code for the FreeBSD operating system. -mcall-netbsd On System V.4 and embedded PowerPC systems compile code for the NetBSD operating system. -mcall-openbsd On System V.4 and embedded PowerPC systems compile code for the OpenBSD operating system. -mtraceback=traceback_type Select the type of traceback table. Valid values for traceback_type are full, part, and no. -maix-struct-return Return all structures in memory (as specified by the AIX ABI). -msvr4-struct-return Return structures smaller than 8 bytes in registers (as specified by the SVR4 ABI). -mabi=abi-type Extend the current ABI with a particular extension, or remove such extension. Valid values are altivec, no-altivec, ibmlongdouble, ieeelongdouble, elfv1, elfv2. -mabi=ibmlongdouble Change the current ABI to use IBM extended-precision long double. This is not likely to work if your system defaults to using IEEE extended-precision long double. 
If you change the long double type from IEEE extended-precision, the compiler will issue a warning unless you use the -Wno-psabi option. Requires -mlong-double-128 to be enabled. -mabi=ieeelongdouble Change the current ABI to use IEEE extended-precision long double. This is not likely to work if your system defaults to using IBM extended-precision long double. If you change the long double type from IBM extended-precision, the compiler will issue a warning unless you use the -Wno-psabi option. Requires -mlong-double-128 to be enabled. -mabi=elfv1 Change the current ABI to use the ELFv1 ABI. This is the default ABI for big-endian PowerPC 64-bit Linux. Overriding the default ABI requires special system support and is likely to fail in spectacular ways. -mabi=elfv2 Change the current ABI to use the ELFv2 ABI. This is the default ABI for little-endian PowerPC 64-bit Linux. Overriding the default ABI requires special system support and is likely to fail in spectacular ways. -mgnu-attribute -mno-gnu-attribute Emit .gnu_attribute assembly directives to set tag/value pairs in a .gnu.attributes section that specify ABI variations in function parameters or return values. -mprototype -mno-prototype On System V.4 and embedded PowerPC systems assume that all calls to variable argument functions are properly prototyped. Otherwise, the compiler must insert an instruction before every non-prototyped call to set or clear bit 6 of the condition code register ("CR") to indicate whether floating- point values are passed in the floating-point registers in case the function takes variable arguments. With -mprototype, only calls to prototyped variable argument functions set or clear the bit. -msim On embedded PowerPC systems, assume that the startup module is called sim-crt0.o and that the standard C libraries are libsim.a and libc.a. This is the default for powerpc-*-eabisim configurations. -mmvme On embedded PowerPC systems, assume that the startup module is called crt0.o and the standard C libraries are libmvme.a and libc.a. -mads On embedded PowerPC systems, assume that the startup module is called crt0.o and the standard C libraries are libads.a and libc.a. -myellowknife On embedded PowerPC systems, assume that the startup module is called crt0.o and the standard C libraries are libyk.a and libc.a. -mvxworks On System V.4 and embedded PowerPC systems, specify that you are compiling for a VxWorks system. -memb On embedded PowerPC systems, set the "PPC_EMB" bit in the ELF flags header to indicate that eabi extended relocations are used. -meabi -mno-eabi On System V.4 and embedded PowerPC systems do (do not) adhere to the Embedded Applications Binary Interface (EABI), which is a set of modifications to the System V.4 specifications. Selecting -meabi means that the stack is aligned to an 8-byte boundary, a function "__eabi" is called from "main" to set up the EABI environment, and the -msdata option can use both "r2" and "r13" to point to two separate small data areas. Selecting -mno-eabi means that the stack is aligned to a 16-byte boundary, no EABI initialization function is called from "main", and the -msdata option only uses "r13" to point to a single small data area. The -meabi option is on by default if you configured GCC using one of the powerpc*-*-eabi* options. -msdata=eabi On System V.4 and embedded PowerPC systems, put small initialized "const" global and static data in the ".sdata2" section, which is pointed to by register "r2". 
Put small initialized non-"const" global and static data in the ".sdata" section, which is pointed to by register "r13". Put small uninitialized global and static data in the ".sbss" section, which is adjacent to the ".sdata" section. The -msdata=eabi option is incompatible with the -mrelocatable option. The -msdata=eabi option also sets the -memb option. -msdata=sysv On System V.4 and embedded PowerPC systems, put small global and static data in the ".sdata" section, which is pointed to by register "r13". Put small uninitialized global and static data in the ".sbss" section, which is adjacent to the ".sdata" section. The -msdata=sysv option is incompatible with the -mrelocatable option. -msdata=default -msdata On System V.4 and embedded PowerPC systems, if -meabi is used, compile code the same as -msdata=eabi, otherwise compile code the same as -msdata=sysv. -msdata=data On System V.4 and embedded PowerPC systems, put small global data in the ".sdata" section. Put small uninitialized global data in the ".sbss" section. Do not use register "r13" to address small data however. This is the default behavior unless other -msdata options are used. -msdata=none -mno-sdata On embedded PowerPC systems, put all initialized global and static data in the ".data" section, and all uninitialized data in the ".bss" section. -mreadonly-in-sdata Put read-only objects in the ".sdata" section as well. This is the default. -mblock-move-inline-limit=num Inline all block moves (such as calls to "memcpy" or structure copies) less than or equal to num bytes. The minimum value for num is 32 bytes on 32-bit targets and 64 bytes on 64-bit targets. The default value is target- specific. -mblock-compare-inline-limit=num Generate non-looping inline code for all block compares (such as calls to "memcmp" or structure compares) less than or equal to num bytes. If num is 0, all inline expansion (non- loop and loop) of block compare is disabled. The default value is target-specific. -mblock-compare-inline-loop-limit=num Generate an inline expansion using loop code for all block compares that are less than or equal to num bytes, but greater than the limit for non-loop inline block compare expansion. If the block length is not constant, at most num bytes will be compared before "memcmp" is called to compare the remainder of the block. The default value is target- specific. -mstring-compare-inline-limit=num Compare at most num string bytes with inline code. If the difference or end of string is not found at the end of the inline compare a call to "strcmp" or "strncmp" will take care of the rest of the comparison. The default is 64 bytes. -G num On embedded PowerPC systems, put global and static items less than or equal to num bytes into the small data or BSS sections instead of the normal data or BSS section. By default, num is 8. The -G num switch is also passed to the linker. All modules should be compiled with the same -G num value. -mregnames -mno-regnames On System V.4 and embedded PowerPC systems do (do not) emit register names in the assembly language output using symbolic forms. -mlongcall -mno-longcall By default assume that all calls are far away so that a longer and more expensive calling sequence is required. This is required for calls farther than 32 megabytes (33,554,432 bytes) from the current location. A short call is generated if the compiler knows the call cannot be that far away. This setting can be overridden by the "shortcall" function attribute, or by "#pragma longcall(0)". 
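When compiling with -mlongcall, the per-function escape hatches mentioned above can be written as follows; the function names are hypothetical:

    /* Compiled with -mlongcall, so calls default to the long sequence.  */
    void log_message (const char *msg) __attribute__ ((shortcall));
                                   /* calls to log_message may use a plain "bl" */

    #pragma longcall (0)           /* declarations below are not marked longcall */
    extern void nearby_helper (void);
    #pragma longcall (1)           /* later declarations use long calls again */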
Some linkers are capable of detecting out-of-range calls and generating glue code on the fly. On these systems, long calls are unnecessary and generate slower code. As of this writing, the AIX linker can do this, as can the GNU linker for PowerPC/64. It is planned to add this feature to the GNU linker for 32-bit PowerPC systems as well.

On PowerPC64 ELFv2 and 32-bit PowerPC systems with newer GNU linkers, GCC can generate long calls using an inline PLT call sequence (see -mpltseq). PowerPC with -mbss-plt and PowerPC64 ELFv1 (big-endian) do not support inline PLT calls.

On Darwin/PPC systems, "#pragma longcall" generates "jbsr callee, L42", plus a branch island (glue code). The two target addresses represent the callee and the branch island. The Darwin/PPC linker prefers the first address and generates a "bl callee" if the PPC "bl" instruction reaches the callee directly; otherwise, the linker generates "bl L42" to call the branch island. The branch island is appended to the body of the calling function; it computes the full 32-bit address of the callee and jumps to it.

On Mach-O (Darwin) systems, this option directs the compiler to emit the glue for every direct call, and the Darwin linker decides whether to use or discard it. In the future, GCC may ignore all longcall specifications when the linker is known to generate glue.

-mpltseq
-mno-pltseq
Implement (do not implement) -fno-plt and long calls using an inline PLT call sequence that supports lazy linking and long calls to functions in dlopen'd shared libraries. Inline PLT calls are only supported on PowerPC64 ELFv2 and 32-bit PowerPC systems with newer GNU linkers, and are enabled by default if the support is detected when configuring GCC, and, in the case of 32-bit PowerPC, if GCC is configured with --enable-secureplt. -mpltseq code and -mbss-plt 32-bit PowerPC relocatable objects may not be linked together.

-mtls-markers
-mno-tls-markers
Mark (do not mark) calls to "__tls_get_addr" with a relocation specifying the function argument. The relocation allows the linker to reliably associate the function call with the argument setup instructions for TLS optimization, which in turn allows GCC to better schedule the sequence.

-mrecip
-mno-recip
This option enables use of the reciprocal estimate and reciprocal square root estimate instructions with additional Newton-Raphson steps to increase precision instead of doing a divide or square root and divide for floating-point arguments. You should use the -ffast-math option when using -mrecip (or at least -funsafe-math-optimizations, -ffinite-math-only, -freciprocal-math and -fno-trapping-math). Note that while the throughput of the sequence is generally higher than the throughput of the non-reciprocal instruction, the precision of the sequence can be decreased by up to 2 ulp (i.e. the inverse of 1.0 equals 0.99999994) for reciprocal square roots.

-mrecip=opt
This option controls which reciprocal estimate instructions may be used. opt is a comma-separated list of options, which may be preceded by a "!" to invert the option:

all      Enable all estimate instructions.

default  Enable the default instructions, equivalent to -mrecip.

none     Disable all estimate instructions, equivalent to -mno-recip.

div      Enable the reciprocal approximation instructions for both single and double precision.

divf     Enable the single-precision reciprocal approximation instructions.

divd     Enable the double-precision reciprocal approximation instructions.
rsqrt Enable the reciprocal square root approximation instructions for both single and double precision. rsqrtf Enable the single-precision reciprocal square root approximation instructions. rsqrtd Enable the double-precision reciprocal square root approximation instructions. So, for example, -mrecip=all,!rsqrtd enables all of the reciprocal estimate instructions, except for the "FRSQRTE", "XSRSQRTEDP", and "XVRSQRTEDP" instructions which handle the double-precision reciprocal square root calculations. -mrecip-precision -mno-recip-precision Assume (do not assume) that the reciprocal estimate instructions provide higher-precision estimates than is mandated by the PowerPC ABI. Selecting -mcpu=power6, -mcpu=power7 or -mcpu=power8 automatically selects -mrecip-precision. The double-precision square root estimate instructions are not generated by default on low-precision machines, since they do not provide an estimate that converges after three steps. -mveclibabi=type Specifies the ABI type to use for vectorizing intrinsics using an external library. The only type supported at present is mass, which specifies to use IBM's Mathematical Acceleration Subsystem (MASS) libraries for vectorizing intrinsics using external libraries. GCC currently emits calls to "acosd2", "acosf4", "acoshd2", "acoshf4", "asind2", "asinf4", "asinhd2", "asinhf4", "atan2d2", "atan2f4", "atand2", "atanf4", "atanhd2", "atanhf4", "cbrtd2", "cbrtf4", "cosd2", "cosf4", "coshd2", "coshf4", "erfcd2", "erfcf4", "erfd2", "erff4", "exp2d2", "exp2f4", "expd2", "expf4", "expm1d2", "expm1f4", "hypotd2", "hypotf4", "lgammad2", "lgammaf4", "log10d2", "log10f4", "log1pd2", "log1pf4", "log2d2", "log2f4", "logd2", "logf4", "powd2", "powf4", "sind2", "sinf4", "sinhd2", "sinhf4", "sqrtd2", "sqrtf4", "tand2", "tanf4", "tanhd2", and "tanhf4" when generating code for power7. Both -ftree-vectorize and -funsafe-math-optimizations must also be enabled. The MASS libraries must be specified at link time. -mfriz -mno-friz Generate (do not generate) the "friz" instruction when the -funsafe-math-optimizations option is used to optimize rounding of floating-point values to 64-bit integer and back to floating point. The "friz" instruction does not return the same value if the floating-point number is too large to fit in an integer. -mpointers-to-nested-functions -mno-pointers-to-nested-functions Generate (do not generate) code to load up the static chain register ("r11") when calling through a pointer on AIX and 64-bit Linux systems where a function pointer points to a 3-word descriptor giving the function address, TOC value to be loaded in register "r2", and static chain value to be loaded in register "r11". The -mpointers-to-nested-functions is on by default. You cannot call through pointers to nested functions or pointers to functions compiled in other languages that use the static chain if you use -mno-pointers-to-nested-functions. -msave-toc-indirect -mno-save-toc-indirect Generate (do not generate) code to save the TOC value in the reserved stack location in the function prologue if the function calls through a pointer on AIX and 64-bit Linux systems. If the TOC value is not saved in the prologue, it is saved just before the call through the pointer. The -mno-save-toc-indirect option is the default. -mcompat-align-parm -mno-compat-align-parm Generate (do not generate) code to pass structure parameters with a maximum alignment of 64 bits, for compatibility with older versions of GCC. 
Older versions of GCC (prior to 4.9.0) incorrectly did not align a structure parameter on a 128-bit boundary when that structure contained a member requiring 128-bit alignment. This is corrected in more recent versions of GCC. This option may be used to generate code that is compatible with functions compiled with older versions of GCC. The -mno-compat-align-parm option is the default. -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset -mstack-protector-guard-symbol=symbol Generate stack protection code using canary at guard. Supported locations are global for global canary or tls for per-thread canary in the TLS block (the default with GNU libc version 2.4 or later). With the latter choice the options -mstack-protector-guard-reg=reg and -mstack-protector-guard-offset=offset furthermore specify which register to use as base register for reading the canary, and from what offset from that base register. The default for those is as specified in the relevant ABI. -mstack-protector-guard-symbol=symbol overrides the offset with a symbol reference to a canary in the TLS block. RX Options These command-line options are defined for RX targets: -m64bit-doubles -m32bit-doubles Make the "double" data type be 64 bits (-m64bit-doubles) or 32 bits (-m32bit-doubles) in size. The default is -m32bit-doubles. Note RX floating-point hardware only works on 32-bit values, which is why the default is -m32bit-doubles. -fpu -nofpu Enables (-fpu) or disables (-nofpu) the use of RX floating- point hardware. The default is enabled for the RX600 series and disabled for the RX200 series. Floating-point instructions are only generated for 32-bit floating-point values, however, so the FPU hardware is not used for doubles if the -m64bit-doubles option is used. Note If the -fpu option is enabled then -funsafe-math-optimizations is also enabled automatically. This is because the RX FPU instructions are themselves unsafe. -mcpu=name Selects the type of RX CPU to be targeted. Currently three types are supported, the generic RX600 and RX200 series hardware and the specific RX610 CPU. The default is RX600. The only difference between RX600 and RX610 is that the RX610 does not support the "MVTIPL" instruction. The RX200 series does not have a hardware floating-point unit and so -nofpu is enabled by default when this type is selected. -mbig-endian-data -mlittle-endian-data Store data (but not code) in the big-endian format. The default is -mlittle-endian-data, i.e. to store data in the little-endian format. -msmall-data-limit=N Specifies the maximum size in bytes of global and static variables which can be placed into the small data area. Using the small data area can lead to smaller and faster code, but the size of area is limited and it is up to the programmer to ensure that the area does not overflow. Also when the small data area is used one of the RX's registers (usually "r13") is reserved for use pointing to this area, so it is no longer available for use by the compiler. This could result in slower and/or larger code if variables are pushed onto the stack instead of being held in this register. Note, common variables (variables that have not been initialized) and constants are not placed into the small data area as they are assigned to other sections in the output executable. The default value is zero, which disables this feature. 
Note, this feature is not enabled by default with higher optimization levels (-O2 etc) because of the potentially detrimental effects of reserving a register. It is up to the programmer to experiment and discover whether this feature is of benefit to their program. See the description of the -mpid option for a description of how the actual register to hold the small data area pointer is chosen. -msim -mno-sim Use the simulator runtime. The default is to use the libgloss board-specific runtime. -mas100-syntax -mno-as100-syntax When generating assembler output use a syntax that is compatible with Renesas's AS100 assembler. This syntax can also be handled by the GAS assembler, but it has some restrictions so it is not generated by default. -mmax-constant-size=N Specifies the maximum size, in bytes, of a constant that can be used as an operand in a RX instruction. Although the RX instruction set does allow constants of up to 4 bytes in length to be used in instructions, a longer value equates to a longer instruction. Thus in some circumstances it can be beneficial to restrict the size of constants that are used in instructions. Constants that are too big are instead placed into a constant pool and referenced via register indirection. The value N can be between 0 and 4. A value of 0 (the default) or 4 means that constants of any size are allowed. -mrelax Enable linker relaxation. Linker relaxation is a process whereby the linker attempts to reduce the size of a program by finding shorter versions of various instructions. Disabled by default. -mint-register=N Specify the number of registers to reserve for fast interrupt handler functions. The value N can be between 0 and 4. A value of 1 means that register "r13" is reserved for the exclusive use of fast interrupt handlers. A value of 2 reserves "r13" and "r12". A value of 3 reserves "r13", "r12" and "r11", and a value of 4 reserves "r13" through "r10". A value of 0, the default, does not reserve any registers. -msave-acc-in-interrupts Specifies that interrupt handler functions should preserve the accumulator register. This is only necessary if normal code might use the accumulator register, for example because it performs 64-bit multiplications. The default is to ignore the accumulator as this makes the interrupt handlers faster. -mpid -mno-pid Enables the generation of position independent data. When enabled any access to constant data is done via an offset from a base address held in a register. This allows the location of constant data to be determined at run time without requiring the executable to be relocated, which is a benefit to embedded applications with tight memory constraints. Data that can be modified is not affected by this option. Note, using this feature reserves a register, usually "r13", for the constant data base address. This can result in slower and/or larger code, especially in complicated functions. The actual register chosen to hold the constant data base address depends upon whether the -msmall-data-limit and/or the -mint-register command-line options are enabled. Starting with register "r13" and proceeding downwards, registers are allocated first to satisfy the requirements of -mint-register, then -mpid and finally -msmall-data-limit. Thus it is possible for the small data area register to be "r8" if both -mint-register=4 and -mpid are specified on the command line. By default this feature is not enabled. The default can be restored via the -mno-pid command-line option. 
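As a hedged illustration of the RX options above, the fragment below shows constant data that is reached through the reserved base register when -mpid is in effect; the file name, the rx-elf-gcc driver name and the chosen option values are illustrative assumptions, not requirements of the port.

    /* pid_example.c: constant data accessed via the -mpid base register.  */
    static const short crc_table[8] = { 0, 1, 3, 7, 15, 31, 63, 127 };

    short
    crc_step (int i)
    {
      /* With -mpid the address of crc_table is formed as an offset from the
         reserved base register instead of as an absolute address.  */
      return crc_table[i & 7];
    }

    /* Possible compile command (illustrative):
         rx-elf-gcc -O2 -mpid -mint-register=2 -c pid_example.c  */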
-mno-warn-multiple-fast-interrupts -mwarn-multiple-fast-interrupts Prevents GCC from issuing a warning message if it finds more than one fast interrupt handler when it is compiling a file. The default is to issue a warning for each extra fast interrupt handler found, as the RX only supports one such interrupt. -mallow-string-insns -mno-allow-string-insns Enables or disables the use of the string manipulation instructions "SMOVF", "SCMPU", "SMOVB", "SMOVU", "SUNTIL", "SWHILE" and also the "RMPA" instruction. These instructions may prefetch data, which is not safe to do if accessing an I/O register. (See section 12.2.7 of the RX62N Group User's Manual for more information). The default is to allow these instructions, but it is not possible for GCC to reliably detect all circumstances where a string instruction might be used to access an I/O register, so their use cannot be disabled automatically. Instead it is reliant upon the programmer to use the -mno-allow-string-insns option if their program accesses I/O space. When the instructions are enabled GCC defines the C preprocessor symbol "__RX_ALLOW_STRING_INSNS__", otherwise it defines the symbol "__RX_DISALLOW_STRING_INSNS__". -mjsr -mno-jsr Use only (or not only) "JSR" instructions to access functions. This option can be used when code size exceeds the range of "BSR" instructions. Note that -mno-jsr does not mean that "JSR" is never used; it means that any type of branch may be used. Note: The generic GCC command-line option -ffixed-reg has special significance to the RX port when used with the "interrupt" function attribute. This attribute indicates a function intended to process fast interrupts. GCC ensures that it only uses the registers "r10", "r11", "r12" and/or "r13" and only provided that the normal use of the corresponding registers has been restricted via the -ffixed-reg or -mint-register command-line options. S/390 and zSeries Options These are the -m options defined for the S/390 and zSeries architecture. -mhard-float -msoft-float Use (do not use) the hardware floating-point instructions and registers for floating-point operations. When -msoft-float is specified, functions in libgcc.a are used to perform floating-point operations. When -mhard-float is specified, the compiler generates IEEE floating-point instructions. This is the default. -mhard-dfp -mno-hard-dfp Use (do not use) the hardware decimal-floating-point instructions for decimal-floating-point operations. When -mno-hard-dfp is specified, functions in libgcc.a are used to perform decimal-floating-point operations. When -mhard-dfp is specified, the compiler generates decimal-floating-point hardware instructions. This is the default for -march=z9-ec or higher. -mlong-double-64 -mlong-double-128 These switches control the size of the "long double" type. A size of 64 bits makes the "long double" type equivalent to the "double" type. This is the default. -mbackchain -mno-backchain Store (do not store) the address of the caller's frame as backchain pointer into the callee's stack frame. A backchain may be needed to allow debugging using tools that do not understand DWARF call frame information. When -mno-packed-stack is in effect, the backchain pointer is stored at the bottom of the stack frame; when -mpacked-stack is in effect, the backchain is placed into the topmost word of the 96/160 byte register save area.
In general, code compiled with -mbackchain is call-compatible with code compiled with -mno-backchain; however, use of the backchain for debugging purposes usually requires that the whole binary is built with -mbackchain. Note that the combination of -mbackchain, -mpacked-stack and -mhard-float is not supported. In order to build a Linux kernel use -msoft-float. The default is to not maintain the backchain. -mpacked-stack -mno-packed-stack Use (do not use) the packed stack layout. When -mno-packed-stack is specified, the compiler uses all fields of the 96/160 byte register save area only for their default purpose; unused fields still take up stack space. When -mpacked-stack is specified, register save slots are densely packed at the top of the register save area; unused space is reused for other purposes, allowing for more efficient use of the available stack space. However, when -mbackchain is also in effect, the topmost word of the save area is always used to store the backchain, and the return address register is always saved two words below the backchain. As long as the stack frame backchain is not used, code generated with -mpacked-stack is call-compatible with code generated with -mno-packed-stack. Note that some non-FSF releases of GCC 2.95 for S/390 or zSeries generated code that uses the stack frame backchain at run time, not just for debugging purposes. Such code is not call-compatible with code compiled with -mpacked-stack. Also, note that the combination of -mbackchain, -mpacked-stack and -mhard-float is not supported. In order to build a Linux kernel use -msoft-float. The default is to not use the packed stack layout. -msmall-exec -mno-small-exec Generate (or do not generate) code using the "bras" instruction to do subroutine calls. This only works reliably if the total executable size does not exceed 64k. The default is to use the "basr" instruction instead, which does not have this limitation. -m64 -m31 When -m31 is specified, generate code compliant to the GNU/Linux for S/390 ABI. When -m64 is specified, generate code compliant to the GNU/Linux for zSeries ABI. This allows GCC in particular to generate 64-bit instructions. For the s390 targets, the default is -m31, while the s390x targets default to -m64. -mzarch -mesa When -mzarch is specified, generate code using the instructions available on z/Architecture. When -mesa is specified, generate code using the instructions available on ESA/390. Note that -mesa is not possible with -m64. When generating code compliant to the GNU/Linux for S/390 ABI, the default is -mesa. When generating code compliant to the GNU/Linux for zSeries ABI, the default is -mzarch. -mhtm -mno-htm The -mhtm option enables a set of builtins making use of instructions available with the transactional execution facility introduced with the IBM zEnterprise EC12 machine generation (see S/390 System z Built-in Functions). -mhtm is enabled by default when using -march=zEC12. -mvx -mno-vx When -mvx is specified, generate code using the instructions available with the vector extension facility introduced with the IBM z13 machine generation. This option changes the ABI for some vector type values with regard to alignment and calling conventions. In case vector type values are being used in an ABI-relevant context a GAS .gnu_attribute command will be added to mark the resulting binary with the ABI used. -mvx is enabled by default when using -march=z13.
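The following sketch shows typical usage, not text from the port: a function whose parameter passing and alignment are affected by the vector ABI selected with -mvx/-mno-vx. The "vector_size" attribute used here is GCC's generic vector extension; the file name and compile command are illustrative assumptions.

    /* vec_example.c: the vector arguments below are passed and aligned
       according to the ABI chosen by -mvx / -mno-vx.  */
    typedef int v4si __attribute__ ((vector_size (16)));

    v4si
    add4 (v4si a, v4si b)
    {
      return a + b;   /* may become a single z13 vector add with -mvx */
    }

    /* Possible compile command (illustrative):
         gcc -march=z13 -mvx -c vec_example.c  */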
-mzvector -mno-zvector The -mzvector option enables vector language extensions and builtins using instructions available with the vector extension facility introduced with the IBM z13 machine generation. This option adds support for vector to be used as a keyword to define vector type variables and arguments. vector is only available when GNU extensions are enabled. It will not be expanded when requesting strict standard compliance e.g. with -std=c99. In addition to the GCC low- level builtins -mzvector enables a set of builtins added for compatibility with AltiVec-style implementations like Power and Cell. In order to make use of these builtins the header file vecintrin.h needs to be included. -mzvector is disabled by default. -mmvcle -mno-mvcle Generate (or do not generate) code using the "mvcle" instruction to perform block moves. When -mno-mvcle is specified, use a "mvc" loop instead. This is the default unless optimizing for size. -mdebug -mno-debug Print (or do not print) additional debug information when compiling. The default is to not print debug information. -march=cpu-type Generate code that runs on cpu-type, which is the name of a system representing a certain processor type. Possible values for cpu-type are z900/arch5, z990/arch6, z9-109, z9-ec/arch7, z10/arch8, z196/arch9, zEC12, z13/arch11, z14/arch12, and native. The default is -march=z900. Specifying native as cpu type can be used to select the best architecture option for the host processor. -march=native has no effect if GCC does not recognize the processor. -mtune=cpu-type Tune to cpu-type everything applicable about the generated code, except for the ABI and the set of available instructions. The list of cpu-type values is the same as for -march. The default is the value used for -march. -mtpf-trace -mno-tpf-trace Generate code that adds (does not add) in TPF OS specific branches to trace routines in the operating system. This option is off by default, even when compiling for the TPF OS. -mfused-madd -mno-fused-madd Generate code that uses (does not use) the floating-point multiply and accumulate instructions. These instructions are generated by default if hardware floating point is used. -mwarn-framesize=framesize Emit a warning if the current function exceeds the given frame size. Because this is a compile-time check it doesn't need to be a real problem when the program runs. It is intended to identify functions that most probably cause a stack overflow. It is useful to be used in an environment with limited stack size e.g. the linux kernel. -mwarn-dynamicstack Emit a warning if the function calls "alloca" or uses dynamically-sized arrays. This is generally a bad idea with a limited stack size. -mstack-guard=stack-guard -mstack-size=stack-size If these options are provided the S/390 back end emits additional instructions in the function prologue that trigger a trap if the stack size is stack-guard bytes above the stack-size (remember that the stack on S/390 grows downward). If the stack-guard option is omitted the smallest power of 2 larger than the frame size of the compiled function is chosen. These options are intended to be used to help debugging stack overflow problems. The additionally emitted code causes only little overhead and hence can also be used in production-like systems without greater performance degradation. The given values have to be exact powers of 2 and stack-size has to be greater than stack-guard without exceeding 64k. 
In order to be efficient the extra code makes the assumption that the stack starts at an address aligned to the value given by stack-size. The stack-guard option can only be used in conjunction with stack-size. -mhotpatch=pre-halfwords,post-halfwords If the hotpatch option is enabled, a "hot-patching" function prologue is generated for all functions in the compilation unit. The function label is prepended with the given number of two-byte NOP instructions (pre-halfwords, maximum 1000000). After the label, 2 * post-halfwords bytes are appended, using the largest NOP-like instructions the architecture allows (maximum 1000000). If both arguments are zero, hotpatching is disabled. This option can be overridden for individual functions with the "hotpatch" attribute. Score Options These options are defined for Score implementations: -meb Compile code for big-endian mode. This is the default. -mel Compile code for little-endian mode. -mnhwloop Disable generation of "bcnz" instructions. -muls Enable generation of unaligned load and store instructions. -mmac Enable the use of multiply-accumulate instructions. Disabled by default. -mscore5 Specify the SCORE5 as the target architecture. -mscore5u Specify the SCORE5U as the target architecture. -mscore7 Specify the SCORE7 as the target architecture. This is the default. -mscore7d Specify the SCORE7D as the target architecture. SH Options These -m options are defined for the SH implementations: -m1 Generate code for the SH1. -m2 Generate code for the SH2. -m2e Generate code for the SH2e. -m2a-nofpu Generate code for the SH2a without FPU, or for an SH2a-FPU in such a way that the floating-point unit is not used. -m2a-single-only Generate code for the SH2a-FPU, in such a way that no double-precision floating-point operations are used. -m2a-single Generate code for the SH2a-FPU assuming the floating-point unit is in single-precision mode by default. -m2a Generate code for the SH2a-FPU assuming the floating-point unit is in double-precision mode by default. -m3 Generate code for the SH3. -m3e Generate code for the SH3e. -m4-nofpu Generate code for the SH4 without a floating-point unit. -m4-single-only Generate code for the SH4 with a floating-point unit that only supports single-precision arithmetic. -m4-single Generate code for the SH4 assuming the floating-point unit is in single-precision mode by default. -m4 Generate code for the SH4. -m4-100 Generate code for SH4-100. -m4-100-nofpu Generate code for SH4-100 in such a way that the floating-point unit is not used. -m4-100-single Generate code for SH4-100 assuming the floating-point unit is in single-precision mode by default. -m4-100-single-only Generate code for SH4-100 in such a way that no double-precision floating-point operations are used. -m4-200 Generate code for SH4-200. -m4-200-nofpu Generate code for SH4-200 in such a way that the floating-point unit is not used. -m4-200-single Generate code for SH4-200 assuming the floating-point unit is in single-precision mode by default. -m4-200-single-only Generate code for SH4-200 in such a way that no double-precision floating-point operations are used. -m4-300 Generate code for SH4-300. -m4-300-nofpu Generate code for SH4-300 in such a way that the floating-point unit is not used. -m4-300-single Generate code for SH4-300 assuming the floating-point unit is in single-precision mode by default. -m4-300-single-only Generate code for SH4-300 in such a way that no double-precision floating-point operations are used.
-m4-340 Generate code for SH4-340 (no MMU, no FPU). -m4-500 Generate code for SH4-500 (no FPU). Passes -isa=sh4-nofpu to the assembler. -m4a-nofpu Generate code for the SH4al-dsp, or for a SH4a in such a way that the floating-point unit is not used. -m4a-single-only Generate code for the SH4a, in such a way that no double- precision floating-point operations are used. -m4a-single Generate code for the SH4a assuming the floating-point unit is in single-precision mode by default. -m4a Generate code for the SH4a. -m4al Same as -m4a-nofpu, except that it implicitly passes -dsp to the assembler. GCC doesn't generate any DSP instructions at the moment. -mb Compile code for the processor in big-endian mode. -ml Compile code for the processor in little-endian mode. -mdalign Align doubles at 64-bit boundaries. Note that this changes the calling conventions, and thus some functions from the standard C library do not work unless you recompile it first with -mdalign. -mrelax Shorten some address references at link time, when possible; uses the linker option -relax. -mbigtable Use 32-bit offsets in "switch" tables. The default is to use 16-bit offsets. -mbitops Enable the use of bit manipulation instructions on SH2A. -mfmovd Enable the use of the instruction "fmovd". Check -mdalign for alignment constraints. -mrenesas Comply with the calling conventions defined by Renesas. -mno-renesas Comply with the calling conventions defined for GCC before the Renesas conventions were available. This option is the default for all targets of the SH toolchain. -mnomacsave Mark the "MAC" register as call-clobbered, even if -mrenesas is given. -mieee -mno-ieee Control the IEEE compliance of floating-point comparisons, which affects the handling of cases where the result of a comparison is unordered. By default -mieee is implicitly enabled. If -ffinite-math-only is enabled -mno-ieee is implicitly set, which results in faster floating-point greater-equal and less-equal comparisons. The implicit settings can be overridden by specifying either -mieee or -mno-ieee. -minline-ic_invalidate Inline code to invalidate instruction cache entries after setting up nested function trampolines. This option has no effect if -musermode is in effect and the selected code generation option (e.g. -m4) does not allow the use of the "icbi" instruction. If the selected code generation option does not allow the use of the "icbi" instruction, and -musermode is not in effect, the inlined code manipulates the instruction cache address array directly with an associative write. This not only requires privileged mode at run time, but it also fails if the cache line had been mapped via the TLB and has become unmapped. -misize Dump instruction size and location in the assembly code. -mpadstruct This option is deprecated. It pads structures to multiple of 4 bytes, which is incompatible with the SH ABI. -matomic-model=model Sets the model of atomic operations and additional parameters as a comma separated list. For details on the atomic built- in functions see __atomic Builtins. The following models and parameters are supported: none Disable compiler generated atomic sequences and emit library calls for atomic operations. This is the default if the target is not "sh*-*-linux*". soft-gusa Generate GNU/Linux compatible gUSA software atomic sequences for the atomic built-in functions. 
The generated atomic sequences require additional support from the interrupt/exception handling code of the system and are only suitable for SH3* and SH4* single-core systems. This option is enabled by default when the target is "sh*-*-linux*" and SH3* or SH4*. When the target is SH4A, this option also partially utilizes the hardware atomic instructions "movli.l" and "movco.l" to create more efficient code, unless strict is specified. soft-tcb Generate software atomic sequences that use a variable in the thread control block. This is a variation of the gUSA sequences which can also be used on SH1* and SH2* targets. The generated atomic sequences require additional support from the interrupt/exception handling code of the system and are only suitable for single-core systems. When using this model, the gbr-offset= parameter has to be specified as well. soft-imask Generate software atomic sequences that temporarily disable interrupts by setting "SR.IMASK = 1111". This model works only when the program runs in privileged mode and is only suitable for single-core systems. Additional support from the interrupt/exception handling code of the system is not required. This model is enabled by default when the target is "sh*-*-linux*" and SH1* or SH2*. hard-llcs Generate hardware atomic sequences using the "movli.l" and "movco.l" instructions only. This is only available on SH4A and is suitable for multi-core systems. Since the hardware instructions support only 32-bit atomic variables, access to 8- or 16-bit variables is emulated with 32-bit accesses. Code compiled with this option is also compatible with other software atomic model interrupt/exception handling systems if executed on an SH4A system. Additional support from the interrupt/exception handling code of the system is not required for this model. gbr-offset= This parameter specifies the offset in bytes of the variable in the thread control block structure that should be used by the generated atomic sequences when the soft-tcb model has been selected. For other models this parameter is ignored. The specified value must be an integer multiple of four and in the range 0-1020. strict This parameter prevents mixed usage of multiple atomic models, even if they are compatible, and makes the compiler generate atomic sequences of the specified model only. -mtas Generate the "tas.b" opcode for "__atomic_test_and_set". Notice that depending on the particular hardware and software configuration this can degrade overall performance due to the operand cache line flushes that are implied by the "tas.b" instruction. On multi-core SH4A processors the "tas.b" instruction must be used with caution since it can result in data corruption for certain cache configurations. -mprefergot When generating position-independent code, emit function calls using the Global Offset Table instead of the Procedure Linkage Table. -musermode -mno-usermode Don't allow (allow) the compiler to generate privileged-mode code. Specifying -musermode also implies -mno-inline-ic_invalidate if the inlined code would not work in user mode. -musermode is the default when the target is "sh*-*-linux*". If the target is SH1* or SH2*, -musermode has no effect, since there is no user mode. -multcost=number Set the cost to assume for a multiply insn. -mdiv=strategy Set the division strategy to be used for integer division operations. strategy can be one of: call-div1 Calls a library function that uses the single-step division instruction "div1" to perform the operation.
Division by zero calculates an unspecified result and does not trap. This is the default except for SH4, SH2A and SHcompact. call-fp Calls a library function that performs the operation in double precision floating point. Division by zero causes a floating-point exception. This is the default for SHcompact with FPU. Specifying this for targets that do not have a double precision FPU defaults to "call-div1". call-table Calls a library function that uses a lookup table for small divisors and the "div1" instruction with case distinction for larger divisors. Division by zero calculates an unspecified result and does not trap. This is the default for SH4. Specifying this for targets that do not have dynamic shift instructions defaults to "call-div1". When a division strategy has not been specified the default strategy is selected based on the current target. For SH2A the default strategy is to use the "divs" and "divu" instructions instead of library function calls. -maccumulate-outgoing-args Reserve space once for outgoing arguments in the function prologue rather than around each call. Generally beneficial for performance and size. Also needed for unwinding to avoid changing the stack frame around conditional code. -mdivsi3_libfunc=name Set the name of the library function used for 32-bit signed division to name. This only affects the name used in the call division strategies, and the compiler still expects the same sets of input/output/clobbered registers as if this option were not present. -mfixed-range=register-range Generate code treating the given register range as fixed registers. A fixed register is one that the register allocator cannot use. This is useful when compiling kernel code. A register range is specified as two registers separated by a dash. Multiple register ranges can be specified separated by a comma. -mbranch-cost=num Assume num to be the cost for a branch instruction. Higher numbers make the compiler try to generate more branch-free code if possible. If not specified the value is selected depending on the processor type that is being compiled for. -mzdcbranch -mno-zdcbranch Assume (do not assume) that zero displacement conditional branch instructions "bt" and "bf" are fast. If -mzdcbranch is specified, the compiler prefers zero displacement branch code sequences. This is enabled by default when generating code for SH4 and SH4A. It can be explicitly disabled by specifying -mno-zdcbranch. -mcbranch-force-delay-slot Force the usage of delay slots for conditional branches, which stuffs the delay slot with a "nop" if a suitable instruction cannot be found. By default this option is disabled. It can be enabled to work around hardware bugs as found in the original SH7055. -mfused-madd -mno-fused-madd Generate code that uses (does not use) the floating-point multiply and accumulate instructions. These instructions are generated by default if hardware floating point is used. The machine-dependent -mfused-madd option is now mapped to the machine-independent -ffp-contract=fast option, and -mno-fused-madd is mapped to -ffp-contract=off. -mfsca -mno-fsca Allow or disallow the compiler to emit the "fsca" instruction for sine and cosine approximations. The option -mfsca must be used in combination with -funsafe-math-optimizations. It is enabled by default when generating code for SH4A. Using -mno-fsca disables sine and cosine approximations even if -funsafe-math-optimizations is in effect. 
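As a hedged example of where -mfsca can apply (the source below and the sh-elf-gcc driver name are illustrative assumptions), a paired sine/cosine computation is the pattern the "fsca" approximation targets; it is only considered when -funsafe-math-optimizations is also given.

    /* rot_example.c: candidate for the SH4A "fsca" sine/cosine approximation.  */
    #include <math.h>

    void
    rotate (float angle, float *c, float *s)
    {
      *c = cosf (angle);   /* with -mfsca both results may come from one fsca */
      *s = sinf (angle);
    }

    /* Possible compile command (illustrative):
         sh-elf-gcc -m4a -O2 -mfsca -funsafe-math-optimizations -c rot_example.c  */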
-mfsrra -mno-fsrra Allow or disallow the compiler to emit the "fsrra" instruction for reciprocal square root approximations. The option -mfsrra must be used in combination with -funsafe-math-optimizations and -ffinite-math-only. It is enabled by default when generating code for SH4A. Using -mno-fsrra disables reciprocal square root approximations even if -funsafe-math-optimizations and -ffinite-math-only are in effect. -mpretend-cmove Prefer zero-displacement conditional branches for conditional move instruction patterns. This can result in faster code on the SH4 processor. -mfdpic Generate code using the FDPIC ABI. Solaris 2 Options These -m options are supported on Solaris 2: -mclear-hwcap -mclear-hwcap tells the compiler to remove the hardware capabilities generated by the Solaris assembler. This is only necessary when object files use ISA extensions not supported by the current machine, but check at runtime whether or not to use them. -mimpure-text -mimpure-text, used in addition to -shared, tells the compiler to not pass -z text to the linker when linking a shared object. Using this option, you can link position- dependent code into a shared object. -mimpure-text suppresses the "relocations remain against allocatable but non-writable sections" linker error message. However, the necessary relocations trigger copy-on-write, and the shared object is not actually shared across processes. Instead of using -mimpure-text, you should compile all source code with -fpic or -fPIC. These switches are supported in addition to the above on Solaris 2: -pthreads This is a synonym for -pthread. SPARC Options These -m options are supported on the SPARC: -mno-app-regs -mapp-regs Specify -mapp-regs to generate output using the global registers 2 through 4, which the SPARC SVR4 ABI reserves for applications. Like the global register 1, each global register 2 through 4 is then treated as an allocable register that is clobbered by function calls. This is the default. To be fully SVR4 ABI-compliant at the cost of some performance loss, specify -mno-app-regs. You should compile libraries and system software with this option. -mflat -mno-flat With -mflat, the compiler does not generate save/restore instructions and uses a "flat" or single register window model. This model is compatible with the regular register window model. The local registers and the input registers (0--5) are still treated as "call-saved" registers and are saved on the stack as needed. With -mno-flat (the default), the compiler generates save/restore instructions (except for leaf functions). This is the normal operating mode. -mfpu -mhard-float Generate output containing floating-point instructions. This is the default. -mno-fpu -msoft-float Generate output containing library calls for floating point. Warning: the requisite libraries are not available for all SPARC targets. Normally the facilities of the machine's usual C compiler are used, but this cannot be done directly in cross-compilation. You must make your own arrangements to provide suitable library functions for cross-compilation. The embedded targets sparc-*-aout and sparclite-*-* do provide software floating-point support. -msoft-float changes the calling convention in the output file; therefore, it is only useful if you compile all of a program with this option. In particular, you need to compile libgcc.a, the library that comes with GCC, with -msoft-float in order for this to work. 
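A minimal sketch of the point above about consistent use of -msoft-float (the sparc-elf-gcc driver name and file name are assumptions): every object linked into the program, including libgcc.a, must be built with the same floating-point convention.

    /* soft_example.c: with -msoft-float the division below is compiled as a
       call into the software floating-point routines rather than an FPU
       instruction, which changes the calling convention for double values.  */
    double
    half (double x)
    {
      return x / 2.0;
    }

    /* Possible compile command (illustrative):
         sparc-elf-gcc -msoft-float -c soft_example.c  */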
-mhard-quad-float Generate output containing quad-word (long double) floating- point instructions. -msoft-quad-float Generate output containing library calls for quad-word (long double) floating-point instructions. The functions called are those specified in the SPARC ABI. This is the default. As of this writing, there are no SPARC implementations that have hardware support for the quad-word floating-point instructions. They all invoke a trap handler for one of these instructions, and then the trap handler emulates the effect of the instruction. Because of the trap handler overhead, this is much slower than calling the ABI library routines. Thus the -msoft-quad-float option is the default. -mno-unaligned-doubles -munaligned-doubles Assume that doubles have 8-byte alignment. This is the default. With -munaligned-doubles, GCC assumes that doubles have 8-byte alignment only if they are contained in another type, or if they have an absolute address. Otherwise, it assumes they have 4-byte alignment. Specifying this option avoids some rare compatibility problems with code generated by other compilers. It is not the default because it results in a performance loss, especially for floating-point code. -muser-mode -mno-user-mode Do not generate code that can only run in supervisor mode. This is relevant only for the "casa" instruction emitted for the LEON3 processor. This is the default. -mfaster-structs -mno-faster-structs With -mfaster-structs, the compiler assumes that structures should have 8-byte alignment. This enables the use of pairs of "ldd" and "std" instructions for copies in structure assignment, in place of twice as many "ld" and "st" pairs. However, the use of this changed alignment directly violates the SPARC ABI. Thus, it's intended only for use on targets where the developer acknowledges that their resulting code is not directly in line with the rules of the ABI. -mstd-struct-return -mno-std-struct-return With -mstd-struct-return, the compiler generates checking code in functions returning structures or unions to detect size mismatches between the two sides of function calls, as per the 32-bit ABI. The default is -mno-std-struct-return. This option has no effect in 64-bit mode. -mlra -mno-lra Enable Local Register Allocation. This is the default for SPARC since GCC 7 so -mno-lra needs to be passed to get old Reload. -mcpu=cpu_type Set the instruction set, register set, and instruction scheduling parameters for machine type cpu_type. Supported values for cpu_type are v7, cypress, v8, supersparc, hypersparc, leon, leon3, leon3v7, sparclite, f930, f934, sparclite86x, sparclet, tsc701, v9, ultrasparc, ultrasparc3, niagara, niagara2, niagara3, niagara4, niagara7 and m8. Native Solaris and GNU/Linux toolchains also support the value native, which selects the best architecture option for the host processor. -mcpu=native has no effect if GCC does not recognize the processor. Default instruction scheduling parameters are used for values that select an architecture and not an implementation. These are v7, v8, sparclite, sparclet, v9. Here is a list of each supported architecture and their supported implementations. v7 cypress, leon3v7 v8 supersparc, hypersparc, leon, leon3 sparclite f930, f934, sparclite86x sparclet tsc701 v9 ultrasparc, ultrasparc3, niagara, niagara2, niagara3, niagara4, niagara7, m8 By default (unless configured otherwise), GCC generates code for the V7 variant of the SPARC architecture. 
With -mcpu=cypress, the compiler additionally optimizes it for the Cypress CY7C602 chip, as used in the SPARCStation/SPARCServer 3xx series. This is also appropriate for the older SPARCStation 1, 2, IPX etc. With -mcpu=v8, GCC generates code for the V8 variant of the SPARC architecture. The only difference from V7 code is that the compiler emits the integer multiply and integer divide instructions which exist in SPARC-V8 but not in SPARC-V7. With -mcpu=supersparc, the compiler additionally optimizes it for the SuperSPARC chip, as used in the SPARCStation 10, 1000 and 2000 series. With -mcpu=sparclite, GCC generates code for the SPARClite variant of the SPARC architecture. This adds the integer multiply, integer divide step and scan ("ffs") instructions which exist in SPARClite but not in SPARC-V7. With -mcpu=f930, the compiler additionally optimizes it for the Fujitsu MB86930 chip, which is the original SPARClite, with no FPU. With -mcpu=f934, the compiler additionally optimizes it for the Fujitsu MB86934 chip, which is the more recent SPARClite with FPU. With -mcpu=sparclet, GCC generates code for the SPARClet variant of the SPARC architecture. This adds the integer multiply, multiply/accumulate, integer divide step and scan ("ffs") instructions which exist in SPARClet but not in SPARC-V7. With -mcpu=tsc701, the compiler additionally optimizes it for the TEMIC SPARClet chip. With -mcpu=v9, GCC generates code for the V9 variant of the SPARC architecture. This adds 64-bit integer and floating- point move instructions, 3 additional floating-point condition code registers and conditional move instructions. With -mcpu=ultrasparc, the compiler additionally optimizes it for the Sun UltraSPARC I/II/IIi chips. With -mcpu=ultrasparc3, the compiler additionally optimizes it for the Sun UltraSPARC III/III+/IIIi/IIIi+/IV/IV+ chips. With -mcpu=niagara, the compiler additionally optimizes it for Sun UltraSPARC T1 chips. With -mcpu=niagara2, the compiler additionally optimizes it for Sun UltraSPARC T2 chips. With -mcpu=niagara3, the compiler additionally optimizes it for Sun UltraSPARC T3 chips. With -mcpu=niagara4, the compiler additionally optimizes it for Sun UltraSPARC T4 chips. With -mcpu=niagara7, the compiler additionally optimizes it for Oracle SPARC M7 chips. With -mcpu=m8, the compiler additionally optimizes it for Oracle M8 chips. -mtune=cpu_type Set the instruction scheduling parameters for machine type cpu_type, but do not set the instruction set or register set that the option -mcpu=cpu_type does. The same values for -mcpu=cpu_type can be used for -mtune=cpu_type, but the only useful values are those that select a particular CPU implementation. Those are cypress, supersparc, hypersparc, leon, leon3, leon3v7, f930, f934, sparclite86x, tsc701, ultrasparc, ultrasparc3, niagara, niagara2, niagara3, niagara4, niagara7 and m8. With native Solaris and GNU/Linux toolchains, native can also be used. -mv8plus -mno-v8plus With -mv8plus, GCC generates code for the SPARC-V8+ ABI. The difference from the V8 ABI is that the global and out registers are considered 64 bits wide. This is enabled by default on Solaris in 32-bit mode for all SPARC-V9 processors. -mvis -mno-vis With -mvis, GCC generates code that takes advantage of the UltraSPARC Visual Instruction Set extensions. The default is -mno-vis. -mvis2 -mno-vis2 With -mvis2, GCC generates code that takes advantage of version 2.0 of the UltraSPARC Visual Instruction Set extensions. 
The default is -mvis2 when targeting a cpu that supports such instructions, such as UltraSPARC-III and later. Setting -mvis2 also sets -mvis. -mvis3 -mno-vis3 With -mvis3, GCC generates code that takes advantage of version 3.0 of the UltraSPARC Visual Instruction Set extensions. The default is -mvis3 when targeting a cpu that supports such instructions, such as niagara-3 and later. Setting -mvis3 also sets -mvis2 and -mvis. -mvis4 -mno-vis4 With -mvis4, GCC generates code that takes advantage of version 4.0 of the UltraSPARC Visual Instruction Set extensions. The default is -mvis4 when targeting a cpu that supports such instructions, such as niagara-7 and later. Setting -mvis4 also sets -mvis3, -mvis2 and -mvis. -mvis4b -mno-vis4b With -mvis4b, GCC generates code that takes advantage of version 4.0 of the UltraSPARC Visual Instruction Set extensions, plus the additional VIS instructions introduced in the Oracle SPARC Architecture 2017. The default is -mvis4b when targeting a cpu that supports such instructions, such as m8 and later. Setting -mvis4b also sets -mvis4, -mvis3, -mvis2 and -mvis. -mcbcond -mno-cbcond With -mcbcond, GCC generates code that takes advantage of the UltraSPARC Compare-and-Branch-on-Condition instructions. The default is -mcbcond when targeting a CPU that supports such instructions, such as Niagara-4 and later. -mfmaf -mno-fmaf With -mfmaf, GCC generates code that takes advantage of the UltraSPARC Fused Multiply-Add Floating-point instructions. The default is -mfmaf when targeting a CPU that supports such instructions, such as Niagara-3 and later. -mfsmuld -mno-fsmuld With -mfsmuld, GCC generates code that takes advantage of the Floating-point Multiply Single to Double (FsMULd) instruction. The default is -mfsmuld when targeting a CPU supporting the architecture versions V8 or V9 with FPU except -mcpu=leon. -mpopc -mno-popc With -mpopc, GCC generates code that takes advantage of the UltraSPARC Population Count instruction. The default is -mpopc when targeting a CPU that supports such an instruction, such as Niagara-2 and later. -msubxc -mno-subxc With -msubxc, GCC generates code that takes advantage of the UltraSPARC Subtract-Extended-with-Carry instruction. The default is -msubxc when targeting a CPU that supports such an instruction, such as Niagara-7 and later. -mfix-at697f Enable the documented workaround for the single erratum of the Atmel AT697F processor (which corresponds to erratum #13 of the AT697E processor). -mfix-ut699 Enable the documented workarounds for the floating-point errata and the data cache nullify errata of the UT699 processor. -mfix-ut700 Enable the documented workaround for the back-to-back store errata of the UT699E/UT700 processor. -mfix-gr712rc Enable the documented workaround for the back-to-back store errata of the GR712RC processor. These -m options are supported in addition to the above on SPARC-V9 processors in 64-bit environments: -m32 -m64 Generate code for a 32-bit or 64-bit environment. The 32-bit environment sets int, long and pointer to 32 bits. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits. -mcmodel=which Set the code model to one of medlow The Medium/Low code model: 64-bit addresses, programs must be linked in the low 32 bits of memory. Programs can be statically or dynamically linked. 
medmid The Medium/Middle code model: 64-bit addresses, programs must be linked in the low 44 bits of memory, the text and data segments must be less than 2GB in size and the data segment must be located within 2GB of the text segment. medany The Medium/Anywhere code model: 64-bit addresses, programs may be linked anywhere in memory, the text and data segments must be less than 2GB in size and the data segment must be located within 2GB of the text segment. embmedany The Medium/Anywhere code model for embedded systems: 64-bit addresses, the text and data segments must be less than 2GB in size, both starting anywhere in memory (determined at link time). The global register %g4 points to the base of the data segment. Programs are statically linked and PIC is not supported. -mmemory-model=mem-model Set the memory model in force on the processor to one of default The default memory model for the processor and operating system. rmo Relaxed Memory Order pso Partial Store Order tso Total Store Order sc Sequential Consistency These memory models are formally defined in Appendix D of the SPARC-V9 architecture manual, as set in the processor's "PSTATE.MM" field. -mstack-bias -mno-stack-bias With -mstack-bias, GCC assumes that the stack pointer, and frame pointer if present, are offset by -2047 which must be added back when making stack frame references. This is the default in 64-bit mode. Otherwise, assume no such offset is present. SPU Options These -m options are supported on the SPU: -mwarn-reloc -merror-reloc The loader for SPU does not handle dynamic relocations. By default, GCC gives an error when it generates code that requires a dynamic relocation. -mno-error-reloc disables the error, -mwarn-reloc generates a warning instead. -msafe-dma -munsafe-dma Instructions that initiate or test completion of DMA must not be reordered with respect to loads and stores of the memory that is being accessed. With -munsafe-dma you must use the "volatile" keyword to protect memory accesses, but that can lead to inefficient code in places where the memory is known to not change. Rather than mark the memory as volatile, you can use -msafe-dma to tell the compiler to treat the DMA instructions as potentially affecting all memory. -mbranch-hints By default, GCC generates a branch hint instruction to avoid pipeline stalls for always-taken or probably-taken branches. A hint is not generated closer than 8 instructions away from its branch. There is little reason to disable them, except for debugging purposes, or to make an object a little bit smaller. -msmall-mem -mlarge-mem By default, GCC generates code assuming that addresses are never larger than 18 bits. With -mlarge-mem code is generated that assumes a full 32-bit address. -mstdmain By default, GCC links against startup code that assumes the SPU-style main function interface (which has an unconventional parameter list). With -mstdmain, GCC links your program against startup code that assumes a C99-style interface to "main", including a local copy of "argv" strings. -mfixed-range=register-range Generate code treating the given register range as fixed registers. A fixed register is one that the register allocator cannot use. This is useful when compiling kernel code. A register range is specified as two registers separated by a dash. Multiple register ranges can be specified separated by a comma. -mea32 -mea64 Compile code assuming that pointers to the PPU address space accessed via the "__ea" named address space qualifier are either 32 or 64 bits wide. 
The default is 32 bits. As this is an ABI-changing option, all object code in an executable must be compiled with the same setting. -maddress-space-conversion -mno-address-space-conversion Allow/disallow treating the "__ea" address space as superset of the generic address space. This enables explicit type casts between "__ea" and generic pointer as well as implicit conversions of generic pointers to "__ea" pointers. The default is to allow address space pointer conversions. -mcache-size=cache-size This option controls the version of libgcc that the compiler links to an executable and selects a software-managed cache for accessing variables in the "__ea" address space with a particular cache size. Possible options for cache-size are 8, 16, 32, 64 and 128. The default cache size is 64KB. -matomic-updates -mno-atomic-updates This option controls the version of libgcc that the compiler links to an executable and selects whether atomic updates to the software-managed cache of PPU-side variables are used. If you use atomic updates, changes to a PPU variable from SPU code using the "__ea" named address space qualifier do not interfere with changes to other PPU variables residing in the same cache line from PPU code. If you do not use atomic updates, such interference may occur; however, writing back cache lines is more efficient. The default behavior is to use atomic updates. -mdual-nops -mdual-nops=n By default, GCC inserts NOPs to increase dual issue when it expects it to increase performance. n can be a value from 0 to 10. A smaller n inserts fewer NOPs. 10 is the default, 0 is the same as -mno-dual-nops. Disabled with -Os. -mhint-max-nops=n Maximum number of NOPs to insert for a branch hint. A branch hint must be at least 8 instructions away from the branch it is affecting. GCC inserts up to n NOPs to enforce this, otherwise it does not generate the branch hint. -mhint-max-distance=n The encoding of the branch hint instruction limits the hint to be within 256 instructions of the branch it is affecting. By default, GCC makes sure it is within 125. -msafe-hints Work around a hardware bug that causes the SPU to stall indefinitely. By default, GCC inserts the "hbrp" instruction to make sure this stall won't happen. Options for System V These additional options are available on System V Release 4 for compatibility with other compilers on those systems: -G Create a shared object. It is recommended that -symbolic or -shared be used instead. -Qy Identify the versions of each tool used by the compiler, in a ".ident" assembler directive in the output. -Qn Refrain from adding ".ident" directives to the output file (this is the default). -YP,dirs Search the directories dirs, and no others, for libraries specified with -l. -Ym,dir Look in the directory dir to find the M4 preprocessor. The assembler uses this option. TILE-Gx Options These -m options are supported on the TILE-Gx: -mcmodel=small Generate code for the small model. The distance for direct calls is limited to 500M in either direction. PC-relative addresses are 32 bits. Absolute addresses support the full address range. -mcmodel=large Generate code for the large model. There is no limitation on call distance, pc-relative addresses, or absolute addresses. -mcpu=name Selects the type of CPU to be targeted. Currently the only supported type is tilegx. -m32 -m64 Generate code for a 32-bit or 64-bit environment. The 32-bit environment sets int, long, and pointer to 32 bits. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits. 
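The effect of the -m32/-m64 data models described above can be sketched with a small program (illustrative only; the same ILP32/LP64 split applies to the other 64-bit targets that document these switches).

    /* model_example.c: prints 4 and 4 under -m32 (ILP32), and 4 and 8
       under -m64 (LP64).  */
    #include <stdio.h>

    int
    main (void)
    {
      printf ("int: %zu  long/pointer: %zu\n",
              sizeof (int), sizeof (long));
      return 0;
    }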
-mbig-endian -mlittle-endian Generate code in big/little endian mode, respectively. TILEPro Options These -m options are supported on the TILEPro: -mcpu=name Selects the type of CPU to be targeted. Currently the only supported type is tilepro. -m32 Generate code for a 32-bit environment, which sets int, long, and pointer to 32 bits. This is the only supported behavior so the flag is essentially ignored. V850 Options These -m options are defined for V850 implementations: -mlong-calls -mno-long-calls Treat all calls as being far away (near). If calls are assumed to be far away, the compiler always loads the function's address into a register, and calls indirect through the pointer. -mno-ep -mep Do not optimize (do optimize) basic blocks that use the same index pointer 4 or more times to copy pointer into the "ep" register, and use the shorter "sld" and "sst" instructions. The -mep option is on by default if you optimize. -mno-prolog-function -mprolog-function Do not use (do use) external functions to save and restore registers at the prologue and epilogue of a function. The external functions are slower, but use less code space if more than one function saves the same number of registers. The -mprolog-function option is on by default if you optimize. -mspace Try to make the code as small as possible. At present, this just turns on the -mep and -mprolog-function options. -mtda=n Put static or global variables whose size is n bytes or less into the tiny data area that register "ep" points to. The tiny data area can hold up to 256 bytes in total (128 bytes for byte references). -msda=n Put static or global variables whose size is n bytes or less into the small data area that register "gp" points to. The small data area can hold up to 64 kilobytes. -mzda=n Put static or global variables whose size is n bytes or less into the first 32 kilobytes of memory. -mv850 Specify that the target processor is the V850. -mv850e3v5 Specify that the target processor is the V850E3V5. The preprocessor constant "__v850e3v5__" is defined if this option is used. -mv850e2v4 Specify that the target processor is the V850E3V5. This is an alias for the -mv850e3v5 option. -mv850e2v3 Specify that the target processor is the V850E2V3. The preprocessor constant "__v850e2v3__" is defined if this option is used. -mv850e2 Specify that the target processor is the V850E2. The preprocessor constant "__v850e2__" is defined if this option is used. -mv850e1 Specify that the target processor is the V850E1. The preprocessor constants "__v850e1__" and "__v850e__" are defined if this option is used. -mv850es Specify that the target processor is the V850ES. This is an alias for the -mv850e1 option. -mv850e Specify that the target processor is the V850E. The preprocessor constant "__v850e__" is defined if this option is used. If neither -mv850 nor -mv850e nor -mv850e1 nor -mv850e2 nor -mv850e2v3 nor -mv850e3v5 are defined then a default target processor is chosen and the relevant __v850*__ preprocessor constant is defined. The preprocessor constants "__v850" and "__v851__" are always defined, regardless of which processor variant is the target. -mdisable-callt -mno-disable-callt This option suppresses generation of the "CALLT" instruction for the v850e, v850e1, v850e2, v850e2v3 and v850e3v5 flavors of the v850 architecture. This option is enabled by default when the RH850 ABI is in use (see -mrh850-abi), and disabled by default when the GCC ABI is in use. 
If "CALLT" instructions are being generated then the C preprocessor symbol "__V850_CALLT__" is defined. -mrelax -mno-relax Pass on (or do not pass on) the -mrelax command-line option to the assembler. -mlong-jumps -mno-long-jumps Disable (or re-enable) the generation of PC-relative jump instructions. -msoft-float -mhard-float Disable (or re-enable) the generation of hardware floating point instructions. This option is only significant when the target architecture is V850E2V3 or higher. If hardware floating point instructions are being generated then the C preprocessor symbol "__FPU_OK__" is defined, otherwise the symbol "__NO_FPU__" is defined. -mloop Enables the use of the e3v5 LOOP instruction. The use of this instruction is not enabled by default when the e3v5 architecture is selected because its use is still experimental. -mrh850-abi -mghs Enables support for the RH850 version of the V850 ABI. This is the default. With this version of the ABI the following rules apply: * Integer sized structures and unions are returned via a memory pointer rather than a register. * Large structures and unions (more than 8 bytes in size) are passed by value. * Functions are aligned to 16-bit boundaries. * The -m8byte-align command-line option is supported. * The -mdisable-callt command-line option is enabled by default. The -mno-disable-callt command-line option is not supported. When this version of the ABI is enabled the C preprocessor symbol "__V850_RH850_ABI__" is defined. -mgcc-abi Enables support for the old GCC version of the V850 ABI. With this version of the ABI the following rules apply: * Integer sized structures and unions are returned in register "r10". * Large structures and unions (more than 8 bytes in size) are passed by reference. * Functions are aligned to 32-bit boundaries, unless optimizing for size. * The -m8byte-align command-line option is not supported. * The -mdisable-callt command-line option is supported but not enabled by default. When this version of the ABI is enabled the C preprocessor symbol "__V850_GCC_ABI__" is defined. -m8byte-align -mno-8byte-align Enables support for "double" and "long long" types to be aligned on 8-byte boundaries. The default is to restrict the alignment of all objects to at most 4-bytes. When -m8byte-align is in effect the C preprocessor symbol "__V850_8BYTE_ALIGN__" is defined. -mbig-switch Generate code suitable for big switch tables. Use this option only if the assembler/linker complain about out of range branches within a switch table. -mapp-regs This option causes r2 and r5 to be used in the code generated by the compiler. This setting is the default. -mno-app-regs This option causes r2 and r5 to be treated as fixed registers. VAX Options These -m options are defined for the VAX: -munix Do not output certain jump instructions ("aobleq" and so on) that the Unix assembler for the VAX cannot handle across long ranges. -mgnu Do output those jump instructions, on the assumption that the GNU assembler is being used. -mg Output code for G-format floating-point numbers instead of D-format. Visium Options -mdebug A program which performs file I/O and is destined to run on an MCM target should be linked with this option. It causes the libraries libc.a and libdebug.a to be linked. The program should be run on the target under the control of the GDB remote debugging stub. -msim A program which performs file I/O and is destined to run on the simulator should be linked with option. This causes libraries libc.a and libsim.a to be linked. 
-mfpu -mhard-float Generate code containing floating-point instructions. This is the default. -mno-fpu -msoft-float Generate code containing library calls for floating-point. -msoft-float changes the calling convention in the output file; therefore, it is only useful if you compile all of a program with this option. In particular, you need to compile libgcc.a, the library that comes with GCC, with -msoft-float in order for this to work. -mcpu=cpu_type Set the instruction set, register set, and instruction scheduling parameters for machine type cpu_type. Supported values for cpu_type are mcm, gr5 and gr6. mcm is a synonym of gr5 present for backward compatibility. By default (unless configured otherwise), GCC generates code for the GR5 variant of the Visium architecture. With -mcpu=gr6, GCC generates code for the GR6 variant of the Visium architecture. The only difference from GR5 code is that the compiler will generate block move instructions. -mtune=cpu_type Set the instruction scheduling parameters for machine type cpu_type, but do not set the instruction set or register set that the option -mcpu=cpu_type would. -msv-mode Generate code for the supervisor mode, where there are no restrictions on the access to general registers. This is the default. -muser-mode Generate code for the user mode, where the access to some general registers is forbidden: on the GR5, registers r24 to r31 cannot be accessed in this mode; on the GR6, only registers r29 to r31 are affected. VMS Options These -m options are defined for the VMS implementations: -mvms-return-codes Return VMS condition codes from "main". The default is to return POSIX-style condition (e.g. error) codes. -mdebug-main=prefix Flag the first routine whose name starts with prefix as the main routine for the debugger. -mmalloc64 Default to 64-bit memory allocation routines. -mpointer-size=size Set the default size of pointers. Possible options for size are 32 or short for 32 bit pointers, 64 or long for 64 bit pointers, and no for supporting only 32 bit pointers. The later option disables "pragma pointer_size". VxWorks Options The options in this section are defined for all VxWorks targets. Options specific to the target hardware are listed with the other options for that target. -mrtp GCC can generate code for both VxWorks kernels and real time processes (RTPs). This option switches from the former to the latter. It also defines the preprocessor macro "__RTP__". -non-static Link an RTP executable against shared libraries rather than static libraries. The options -static and -shared can also be used for RTPs; -static is the default. -Bstatic -Bdynamic These options are passed down to the linker. They are defined for compatibility with Diab. -Xbind-lazy Enable lazy binding of function calls. This option is equivalent to -Wl,-z,now and is defined for compatibility with Diab. -Xbind-now Disable lazy binding of function calls. This option is the default and is defined for compatibility with Diab. x86 Options These -m options are defined for the x86 family of computers. -march=cpu-type Generate instructions for the machine type cpu-type. In contrast to -mtune=cpu-type, which merely tunes the generated code for the specified cpu-type, -march=cpu-type allows GCC to generate code that may not run at all on processors other than the one indicated. Specifying -march=cpu-type implies -mtune=cpu-type. 
The choices for cpu-type are: native This selects the CPU to generate code for at compilation time by determining the processor type of the compiling machine. Using -march=native enables all instruction subsets supported by the local machine (hence the result might not run on different machines). Using -mtune=native produces code optimized for the local machine under the constraints of the selected instruction set. x86-64 A generic CPU with 64-bit extensions. i386 Original Intel i386 CPU. i486 Intel i486 CPU. (No scheduling is implemented for this chip.) i586 pentium Intel Pentium CPU with no MMX support. lakemont Intel Lakemont MCU, based on Intel Pentium CPU. pentium-mmx Intel Pentium MMX CPU, based on Pentium core with MMX instruction set support. pentiumpro Intel Pentium Pro CPU. i686 When used with -march, the Pentium Pro instruction set is used, so the code runs on all i686 family chips. When used with -mtune, it has the same meaning as generic. pentium2 Intel Pentium II CPU, based on Pentium Pro core with MMX instruction set support. pentium3 pentium3m Intel Pentium III CPU, based on Pentium Pro core with MMX and SSE instruction set support. pentium-m Intel Pentium M; low-power version of Intel Pentium III CPU with MMX, SSE and SSE2 instruction set support. Used by Centrino notebooks. pentium4 pentium4m Intel Pentium 4 CPU with MMX, SSE and SSE2 instruction set support. prescott Improved version of Intel Pentium 4 CPU with MMX, SSE, SSE2 and SSE3 instruction set support. nocona Improved version of Intel Pentium 4 CPU with 64-bit extensions, MMX, SSE, SSE2 and SSE3 instruction set support. core2 Intel Core 2 CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. nehalem Intel Nehalem CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2 and POPCNT instruction set support. westmere Intel Westmere CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES and PCLMUL instruction set support. sandybridge Intel Sandy Bridge CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AES and PCLMUL instruction set support. ivybridge Intel Ivy Bridge CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AES, PCLMUL, FSGSBASE, RDRND and F16C instruction set support. haswell Intel Haswell CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2 and F16C instruction set support. broadwell Intel Broadwell CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED ADCX and PREFETCHW instruction set support. skylake Intel Skylake CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC and XSAVES instruction set support. bonnell Intel Bonnell CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. silvermont Intel Silvermont CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES, PREFETCHW, PCLMUL and RDRND instruction set support. goldmont Intel Goldmont CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES, PREFETCHW, PCLMUL, RDRND, XSAVE, XSAVEC, XSAVES, XSAVEOPT and FSGSBASE instruction set support. 
goldmont-plus Intel Goldmont Plus CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES, PREFETCHW, PCLMUL, RDRND, XSAVE, XSAVEC, XSAVES, XSAVEOPT, FSGSBASE, PTWRITE, RDPID, SGX and UMIP instruction set support. tremont Intel Tremont CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES, PREFETCHW, PCLMUL, RDRND, XSAVE, XSAVEC, XSAVES, XSAVEOPT, FSGSBASE, PTWRITE, RDPID, SGX, UMIP, GFNI-SSE, CLWB, MOVDIRI, MOVDIR64B, CLDEMOTE and WAITPKG instruction set support. knl Intel Knight's Landing CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, PREFETCHWT1, AVX512F, AVX512PF, AVX512ER and AVX512CD instruction set support. knm Intel Knights Mill CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, PREFETCHWT1, AVX512F, AVX512PF, AVX512ER, AVX512CD, AVX5124VNNIW, AVX5124FMAPS and AVX512VPOPCNTDQ instruction set support. skylake-avx512 Intel Skylake Server CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, CLWB, AVX512VL, AVX512BW, AVX512DQ and AVX512CD instruction set support. cannonlake Intel Cannonlake Server CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, AVX512VL, AVX512BW, AVX512DQ, AVX512CD, AVX512VBMI, AVX512IFMA, SHA and UMIP instruction set support. icelake-client Intel Icelake Client CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, AVX512VL, AVX512BW, AVX512DQ, AVX512CD, AVX512VBMI, AVX512IFMA, SHA, CLWB, UMIP, RDPID, GFNI, AVX512VBMI2, AVX512VPOPCNTDQ, AVX512BITALG, AVX512VNNI, VPCLMULQDQ, VAES instruction set support. icelake-server Intel Icelake Server CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, AVX512VL, AVX512BW, AVX512DQ, AVX512CD, AVX512VBMI, AVX512IFMA, SHA, CLWB, UMIP, RDPID, GFNI, AVX512VBMI2, AVX512VPOPCNTDQ, AVX512BITALG, AVX512VNNI, VPCLMULQDQ, VAES, PCONFIG and WBNOINVD instruction set support. cascadelake Intel Cascadelake CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, CLWB, AVX512VL, AVX512BW, AVX512DQ, AVX512CD and AVX512VNNI instruction set support. 
tigerlake Intel Tigerlake CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, AVX512VL, AVX512BW, AVX512DQ, AVX512CD, AVX512VBMI, AVX512IFMA, SHA, CLWB, UMIP, RDPID, GFNI, AVX512VBMI2, AVX512VPOPCNTDQ, AVX512BITALG, AVX512VNNI, VPCLMULQDQ, VAES, PCONFIG, WBNOINVD, MOVDIRI, MOVDIR64B and CLWB instruction set support. k6 AMD K6 CPU with MMX instruction set support. k6-2 k6-3 Improved versions of AMD K6 CPU with MMX and 3DNow! instruction set support. athlon athlon-tbird AMD Athlon CPU with MMX, 3dNOW!, enhanced 3DNow! and SSE prefetch instructions support. athlon-4 athlon-xp athlon-mp Improved AMD Athlon CPU with MMX, 3DNow!, enhanced 3DNow! and full SSE instruction set support. k8 opteron athlon64 athlon-fx Processors based on the AMD K8 core with x86-64 instruction set support, including the AMD Opteron, Athlon 64, and Athlon 64 FX processors. (This supersets MMX, SSE, SSE2, 3DNow!, enhanced 3DNow! and 64-bit instruction set extensions.) k8-sse3 opteron-sse3 athlon64-sse3 Improved versions of AMD K8 cores with SSE3 instruction set support. amdfam10 barcelona CPUs based on AMD Family 10h cores with x86-64 instruction set support. (This supersets MMX, SSE, SSE2, SSE3, SSE4A, 3DNow!, enhanced 3DNow!, ABM and 64-bit instruction set extensions.) bdver1 CPUs based on AMD Family 15h cores with x86-64 instruction set support. (This supersets FMA4, AVX, XOP, LWP, AES, PCL_MUL, CX16, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM and 64-bit instruction set extensions.) bdver2 AMD Family 15h core based CPUs with x86-64 instruction set support. (This supersets BMI, TBM, F16C, FMA, FMA4, AVX, XOP, LWP, AES, PCL_MUL, CX16, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM and 64-bit instruction set extensions.) bdver3 AMD Family 15h core based CPUs with x86-64 instruction set support. (This supersets BMI, TBM, F16C, FMA, FMA4, FSGSBASE, AVX, XOP, LWP, AES, PCL_MUL, CX16, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM and 64-bit instruction set extensions. bdver4 AMD Family 15h core based CPUs with x86-64 instruction set support. (This supersets BMI, BMI2, TBM, F16C, FMA, FMA4, FSGSBASE, AVX, AVX2, XOP, LWP, AES, PCL_MUL, CX16, MOVBE, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM and 64-bit instruction set extensions. znver1 AMD Family 17h core based CPUs with x86-64 instruction set support. (This supersets BMI, BMI2, F16C, FMA, FSGSBASE, AVX, AVX2, ADCX, RDSEED, MWAITX, SHA, CLZERO, AES, PCL_MUL, CX16, MOVBE, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM, XSAVEC, XSAVES, CLFLUSHOPT, POPCNT, and 64-bit instruction set extensions. znver2 AMD Family 17h core based CPUs with x86-64 instruction set support. (This supersets BMI, BMI2, ,CLWB, F16C, FMA, FSGSBASE, AVX, AVX2, ADCX, RDSEED, MWAITX, SHA, CLZERO, AES, PCL_MUL, CX16, MOVBE, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM, XSAVEC, XSAVES, CLFLUSHOPT, POPCNT, and 64-bit instruction set extensions.) btver1 CPUs based on AMD Family 14h cores with x86-64 instruction set support. (This supersets MMX, SSE, SSE2, SSE3, SSSE3, SSE4A, CX16, ABM and 64-bit instruction set extensions.) btver2 CPUs based on AMD Family 16h cores with x86-64 instruction set support. This includes MOVBE, F16C, BMI, AVX, PCL_MUL, AES, SSE4.2, SSE4.1, CX16, ABM, SSE4A, SSSE3, SSE3, SSE2, SSE, MMX and 64-bit instruction set extensions. 
winchip-c6 IDT WinChip C6 CPU, dealt in same way as i486 with additional MMX instruction set support. winchip2 IDT WinChip 2 CPU, dealt in same way as i486 with additional MMX and 3DNow! instruction set support. c3 VIA C3 CPU with MMX and 3DNow! instruction set support. (No scheduling is implemented for this chip.) c3-2 VIA C3-2 (Nehemiah/C5XL) CPU with MMX and SSE instruction set support. (No scheduling is implemented for this chip.) c7 VIA C7 (Esther) CPU with MMX, SSE, SSE2 and SSE3 instruction set support. (No scheduling is implemented for this chip.) samuel-2 VIA Eden Samuel 2 CPU with MMX and 3DNow! instruction set support. (No scheduling is implemented for this chip.) nehemiah VIA Eden Nehemiah CPU with MMX and SSE instruction set support. (No scheduling is implemented for this chip.) esther VIA Eden Esther CPU with MMX, SSE, SSE2 and SSE3 instruction set support. (No scheduling is implemented for this chip.) eden-x2 VIA Eden X2 CPU with x86-64, MMX, SSE, SSE2 and SSE3 instruction set support. (No scheduling is implemented for this chip.) eden-x4 VIA Eden X4 CPU with x86-64, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX and AVX2 instruction set support. (No scheduling is implemented for this chip.) nano Generic VIA Nano CPU with x86-64, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. (No scheduling is implemented for this chip.) nano-1000 VIA Nano 1xxx CPU with x86-64, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. (No scheduling is implemented for this chip.) nano-2000 VIA Nano 2xxx CPU with x86-64, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. (No scheduling is implemented for this chip.) nano-3000 VIA Nano 3xxx CPU with x86-64, MMX, SSE, SSE2, SSE3, SSSE3 and SSE4.1 instruction set support. (No scheduling is implemented for this chip.) nano-x2 VIA Nano Dual Core CPU with x86-64, MMX, SSE, SSE2, SSE3, SSSE3 and SSE4.1 instruction set support. (No scheduling is implemented for this chip.) nano-x4 VIA Nano Quad Core CPU with x86-64, MMX, SSE, SSE2, SSE3, SSSE3 and SSE4.1 instruction set support. (No scheduling is implemented for this chip.) geode AMD Geode embedded processor with MMX and 3DNow! instruction set support. -mtune=cpu-type Tune to cpu-type everything applicable about the generated code, except for the ABI and the set of available instructions. While picking a specific cpu-type schedules things appropriately for that particular chip, the compiler does not generate any code that cannot run on the default machine type unless you use a -march=cpu-type option. For example, if GCC is configured for i686-pc-linux-gnu then -mtune=pentium4 generates code that is tuned for Pentium 4 but still runs on i686 machines. The choices for cpu-type are the same as for -march. In addition, -mtune supports 2 extra choices for cpu-type: generic Produce code optimized for the most common IA32/AMD64/EM64T processors. If you know the CPU on which your code will run, then you should use the corresponding -mtune or -march option instead of -mtune=generic. But, if you do not know exactly what CPU users of your application will have, then you should use this option. As new processors are deployed in the marketplace, the behavior of this option will change. Therefore, if you upgrade to a newer version of GCC, code generation controlled by this option will change to reflect the processors that are most common at the time that version of GCC is released. 
There is no -march=generic option because -march indicates the instruction set the compiler can use, and there is no generic instruction set applicable to all processors. In contrast, -mtune indicates the processor (or, in this case, collection of processors) for which the code is optimized. intel Produce code optimized for the most current Intel processors, which are Haswell and Silvermont for this version of GCC. If you know the CPU on which your code will run, then you should use the corresponding -mtune or -march option instead of -mtune=intel. But, if you want your application performs better on both Haswell and Silvermont, then you should use this option. As new Intel processors are deployed in the marketplace, the behavior of this option will change. Therefore, if you upgrade to a newer version of GCC, code generation controlled by this option will change to reflect the most current Intel processors at the time that version of GCC is released. There is no -march=intel option because -march indicates the instruction set the compiler can use, and there is no common instruction set applicable to all processors. In contrast, -mtune indicates the processor (or, in this case, collection of processors) for which the code is optimized. -mcpu=cpu-type A deprecated synonym for -mtune. -mfpmath=unit Generate floating-point arithmetic for selected unit unit. The choices for unit are: 387 Use the standard 387 floating-point coprocessor present on the majority of chips and emulated otherwise. Code compiled with this option runs almost everywhere. The temporary results are computed in 80-bit precision instead of the precision specified by the type, resulting in slightly different results compared to most of other chips. See -ffloat-store for more detailed description. This is the default choice for non-Darwin x86-32 targets. sse Use scalar floating-point instructions present in the SSE instruction set. This instruction set is supported by Pentium III and newer chips, and in the AMD line by Athlon-4, Athlon XP and Athlon MP chips. The earlier version of the SSE instruction set supports only single- precision arithmetic, thus the double and extended- precision arithmetic are still done using 387. A later version, present only in Pentium 4 and AMD x86-64 chips, supports double-precision arithmetic too. For the x86-32 compiler, you must use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default. The resulting code should be considerably faster in the majority of cases and avoid the numerical instability problems of 387 code, but may break some existing code that expects temporaries to be 80 bits. This is the default choice for the x86-64 compiler, Darwin x86-32 targets, and the default choice for x86-32 targets with the SSE2 instruction set when -ffast-math is enabled. sse,387 sse+387 both Attempt to utilize both instruction sets at once. This effectively doubles the amount of available registers, and on chips with separate execution units for 387 and SSE the execution resources too. Use this option with care, as it is still experimental, because the GCC register allocator does not model separate functional units well, resulting in unstable performance. -masm=dialect Output assembly instructions using selected dialect. Also affects which dialect is used for basic "asm" and extended "asm". Supported choices (in dialect order) are att or intel. The default is att. Darwin does not support intel. 
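The practical difference between -mfpmath=387 and -mfpmath=sse can be made visible with a small test case. The program below is only a sketch for illustration and is not taken from the GCC documentation: with 387 math the intermediate sum may be held in an 80-bit register and survive the cancellation, while with SSE math each operation is rounded to double immediately.

        #include <stdio.h>

        int main(void)
        {
            /* 1.0 is below the rounding step of double at 1e16, but not
               below that of the x87 80-bit temporaries.                 */
            volatile double a = 1.0e16;
            double b = 1.0, c = -1.0e16;
            double r = (a + b) + c;
            /* Typically prints 0 with -mfpmath=sse and may print 1 with
               -mfpmath=387, depending on how temporaries are kept.      */
            printf("%g\n", r);
            return 0;
        }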
-mieee-fp -mno-ieee-fp Control whether or not the compiler uses IEEE floating-point comparisons. These correctly handle the case where the result of a comparison is unordered. -m80387 -mhard-float Generate output containing 80387 instructions for floating point. -mno-80387 -msoft-float Generate output containing library calls for floating point. Warning: the requisite libraries are not part of GCC. Normally the facilities of the machine's usual C compiler are used, but this cannot be done directly in cross-compilation. You must make your own arrangements to provide suitable library functions for cross-compilation. On machines where a function returns floating-point results in the 80387 register stack, some floating-point opcodes may be emitted even if -msoft-float is used. -mno-fp-ret-in-387 Do not use the FPU registers for return values of functions. The usual calling convention has functions return values of types "float" and "double" in an FPU register, even if there is no FPU. The idea is that the operating system should emulate an FPU. The option -mno-fp-ret-in-387 causes such values to be returned in ordinary CPU registers instead. -mno-fancy-math-387 Some 387 emulators do not support the "sin", "cos" and "sqrt" instructions for the 387. Specify this option to avoid generating those instructions. This option is overridden when -march indicates that the target CPU always has an FPU and so the instruction does not need emulation. These instructions are not generated unless you also use the -funsafe-math-optimizations switch. -malign-double -mno-align-double Control whether GCC aligns "double", "long double", and "long long" variables on a two-word boundary or a one-word boundary. Aligning "double" variables on a two-word boundary produces code that runs somewhat faster on a Pentium at the expense of more memory. On x86-64, -malign-double is enabled by default. Warning: if you use the -malign-double switch, structures containing the above types are aligned differently than the published application binary interface specifications for the x86-32 and are not binary compatible with structures in code compiled without that switch. -m96bit-long-double -m128bit-long-double These switches control the size of "long double" type. The x86-32 application binary interface specifies the size to be 96 bits, so -m96bit-long-double is the default in 32-bit mode. Modern architectures (Pentium and newer) prefer "long double" to be aligned to an 8- or 16-byte boundary. In arrays or structures conforming to the ABI, this is not possible. So specifying -m128bit-long-double aligns "long double" to a 16-byte boundary by padding the "long double" with an additional 32-bit zero. In the x86-64 compiler, -m128bit-long-double is the default choice as its ABI specifies that "long double" is aligned on 16-byte boundary. Notice that neither of these options enable any extra precision over the x87 standard of 80 bits for a "long double". Warning: if you override the default value for your target ABI, this changes the size of structures and arrays containing "long double" variables, as well as modifying the function calling convention for functions taking "long double". Hence they are not binary-compatible with code compiled without that switch. -mlong-double-64 -mlong-double-80 -mlong-double-128 These switches control the size of "long double" type. A size of 64 bits makes the "long double" type equivalent to the "double" type. This is the default for 32-bit Bionic C library. 
A size of 128 bits makes the "long double" type equivalent to the "__float128" type. This is the default for 64-bit Bionic C library. Warning: if you override the default value for your target ABI, this changes the size of structures and arrays containing "long double" variables, as well as modifying the function calling convention for functions taking "long double". Hence they are not binary-compatible with code compiled without that switch. -malign-data=type Control how GCC aligns variables. Supported values for type are compat uses increased alignment value compatible uses GCC 4.8 and earlier, abi uses alignment value as specified by the psABI, and cacheline uses increased alignment value to match the cache line size. compat is the default. -mlarge-data-threshold=threshold When -mcmodel=medium is specified, data objects larger than threshold are placed in the large data section. This value must be the same across all objects linked into the binary, and defaults to 65535. -mrtd Use a different function-calling convention, in which functions that take a fixed number of arguments return with the "ret num" instruction, which pops their arguments while returning. This saves one instruction in the caller since there is no need to pop the arguments there. You can specify that an individual function is called with this calling sequence with the function attribute "stdcall". You can also override the -mrtd option by using the function attribute "cdecl". Warning: this calling convention is incompatible with the one normally used on Unix, so you cannot use it if you need to call libraries compiled with the Unix compiler. Also, you must provide function prototypes for all functions that take variable numbers of arguments (including "printf"); otherwise incorrect code is generated for calls to those functions. In addition, seriously incorrect code results if you call a function with too many arguments. (Normally, extra arguments are harmlessly ignored.) -mregparm=num Control how many registers are used to pass integer arguments. By default, no registers are used to pass arguments, and at most 3 registers can be used. You can control this behavior for a specific function by using the function attribute "regparm". Warning: if you use this switch, and num is nonzero, then you must build all modules with the same value, including any libraries. This includes the system libraries and startup modules. -msseregparm Use SSE register passing conventions for float and double arguments and return values. You can control this behavior for a specific function by using the function attribute "sseregparm". Warning: if you use this switch then you must build all modules with the same value, including any libraries. This includes the system libraries and startup modules. -mvect8-ret-in-mem Return 8-byte vectors in memory instead of MMX registers. This is the default on Solaris 8 and 9 and VxWorks to match the ABI of the Sun Studio compilers until version 12. Later compiler versions (starting with Studio 12 Update 1) follow the ABI used by other x86 targets, which is the default on Solaris 10 and later. Only use this option if you need to remain compatible with existing code produced by those previous compiler versions or older versions of GCC. -mpc32 -mpc64 -mpc80 Set 80387 floating-point precision to 32, 64 or 80 bits. 
When -mpc32 is specified, the significands of results of floating-point operations are rounded to 24 bits (single precision); -mpc64 rounds the significands of results of floating-point operations to 53 bits (double precision) and -mpc80 rounds the significands of results of floating-point operations to 64 bits (extended double precision), which is the default. When this option is used, floating-point operations in higher precisions are not available to the programmer without setting the FPU control word explicitly. Setting the rounding of floating-point operations to less than the default 80 bits can speed some programs by 2% or more. Note that some mathematical libraries assume that extended-precision (80-bit) floating-point operations are enabled by default; routines in such libraries could suffer significant loss of accuracy, typically through so-called "catastrophic cancellation", when this option is used to set the precision to less than extended precision.
-mstackrealign Realign the stack at entry. On the x86, the -mstackrealign option generates an alternate prologue and epilogue that realigns the run-time stack if necessary. This supports mixing legacy code that keeps 4-byte stack alignment with modern code that keeps 16-byte stack alignment for SSE compatibility. See also the attribute "force_align_arg_pointer", applicable to individual functions.
-mpreferred-stack-boundary=num Attempt to keep the stack boundary aligned to a 2 raised to num byte boundary. If -mpreferred-stack-boundary is not specified, the default is 4 (16 bytes or 128 bits). Warning: When generating code for the x86-64 architecture with SSE extensions disabled, -mpreferred-stack-boundary=3 can be used to keep the stack boundary aligned to an 8-byte boundary. Since the x86-64 ABI requires 16-byte stack alignment, this is ABI-incompatible and intended to be used in a controlled environment where stack space is an important limitation. This option leads to wrong code when functions compiled with 16-byte stack alignment (such as functions from a standard library) are called with a misaligned stack. In this case, SSE instructions may lead to misaligned memory access traps. In addition, variable arguments are handled incorrectly for 16-byte aligned objects (including x87 long double and __int128), leading to wrong results. You must build all modules with -mpreferred-stack-boundary=3, including any libraries. This includes the system libraries and startup modules.
-mincoming-stack-boundary=num Assume the incoming stack is aligned to a 2 raised to num byte boundary. If -mincoming-stack-boundary is not specified, the one specified by -mpreferred-stack-boundary is used. On Pentium and Pentium Pro, "double" and "long double" values should be aligned to an 8-byte boundary (see -malign-double) or suffer significant run-time performance penalties. On Pentium III, the Streaming SIMD Extension (SSE) data type "__m128" may not work properly if it is not 16-byte aligned. To ensure proper alignment of these values on the stack, the stack boundary must be as aligned as that required by any value stored on the stack. Further, every function must be generated such that it keeps the stack aligned. Thus calling a function compiled with a higher preferred stack boundary from a function compiled with a lower preferred stack boundary most likely misaligns the stack. It is recommended that libraries that use callbacks always use the default setting. This extra alignment does consume extra stack space, and generally increases code size.
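The per-function attributes mentioned above ("stdcall", "cdecl", "regparm", "force_align_arg_pointer") give finer-grained control than the corresponding -m switches. The following is only a brief sketch for a 32-bit x86 build; the function names are invented for the example.

        /* Pass the three arguments in registers even when -mregparm
           is not given globally.                                      */
        int __attribute__((regparm(3))) sum3(int a, int b, int c)
        {
            return a + b + c;
        }

        /* Keep the ordinary caller-pops convention for one callback
           in a program otherwise built with -mrtd.                    */
        void __attribute__((cdecl)) legacy_callback(int code);

        /* Realign the stack only in the entry point reached from
           4-byte-aligned legacy code, instead of building everything
           with -mstackrealign.                                        */
        void __attribute__((force_align_arg_pointer)) from_legacy(void)
        {
            /* ... SSE code that needs a 16-byte-aligned stack ... */
        }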
Code that is sensitive to stack space usage, such as embedded systems and operating system kernels, may want to reduce the preferred alignment to -mpreferred-stack-boundary=2. -mmmx -msse -msse2 -msse3 -mssse3 -msse4 -msse4a -msse4.1 -msse4.2 -mavx -mavx2 -mavx512f -mavx512pf -mavx512er -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -msha -maes -mpclmul -mclflushopt -mclwb -mfsgsbase -mptwrite -mrdrnd -mf16c -mfma -mpconfig -mwbnoinvd -mfma4 -mprfchw -mrdpid -mprefetchwt1 -mrdseed -msgx -mxop -mlwp -m3dnow -m3dnowa -mpopcnt -mabm -madx -mbmi -mbmi2 -mlzcnt -mfxsr -mxsave -mxsaveopt -mxsavec -mxsaves -mrtm -mhle -mtbm -mmwaitx -mclzero -mpku -mavx512vbmi2 -mgfni -mvaes -mwaitpkg -mvpclmulqdq -mavx512bitalg -mmovdiri -mmovdir64b -mavx512vpopcntdq -mavx5124fmaps -mavx512vnni -mavx5124vnniw -mcldemote These switches enable the use of instructions in the MMX, SSE, SSE2, SSE3, SSSE3, SSE4, SSE4A, SSE4.1, SSE4.2, AVX, AVX2, AVX512F, AVX512PF, AVX512ER, AVX512CD, AVX512VL, AVX512BW, AVX512DQ, AVX512IFMA, AVX512VBMI, SHA, AES, PCLMUL, CLFLUSHOPT, CLWB, FSGSBASE, PTWRITE, RDRND, F16C, FMA, PCONFIG, WBNOINVD, FMA4, PREFETCHW, RDPID, PREFETCHWT1, RDSEED, SGX, XOP, LWP, 3DNow!, enhanced 3DNow!, POPCNT, ABM, ADX, BMI, BMI2, LZCNT, FXSR, XSAVE, XSAVEOPT, XSAVEC, XSAVES, RTM, HLE, TBM, MWAITX, CLZERO, PKU, AVX512VBMI2, GFNI, VAES, WAITPKG, VPCLMULQDQ, AVX512BITALG, MOVDIRI, MOVDIR64B, AVX512VPOPCNTDQ, AVX5124FMAPS, AVX512VNNI, AVX5124VNNIW, or CLDEMOTE extended instruction sets. Each has a corresponding -mno- option to disable use of these instructions. These extensions are also available as built-in functions: see x86 Built-in Functions, for details of the functions enabled and disabled by these switches. To generate SSE/SSE2 instructions automatically from floating-point code (as opposed to 387 instructions), see -mfpmath=sse. GCC depresses SSEx instructions when -mavx is used. Instead, it generates new AVX instructions or AVX equivalence for all SSEx instructions when needed. These options enable GCC to use these extended instructions in generated code, even without -mfpmath=sse. Applications that perform run-time CPU detection must compile separate files for each supported architecture, using the appropriate flags. In particular, the file containing the CPU detection code should be compiled without these options. -mdump-tune-features This option instructs GCC to dump the names of the x86 performance tuning features and default settings. The names can be used in -mtune-ctrl=feature-list. -mtune-ctrl=feature-list This option is used to do fine grain control of x86 code generation features. feature-list is a comma separated list of feature names. See also -mdump-tune-features. When specified, the feature is turned on if it is not preceded with ^, otherwise, it is turned off. -mtune-ctrl=feature- list is intended to be used by GCC developers. Using it may lead to code paths not covered by testing and can potentially result in compiler ICEs or runtime errors. -mno-default This option instructs GCC to turn off all tunable features. See also -mtune-ctrl=feature-list and -mdump-tune-features. -mcld This option instructs GCC to emit a "cld" instruction in the prologue of functions that use string instructions. String instructions depend on the DF flag to select between autoincrement or autodecrement mode. While the ABI specifies the DF flag to be cleared on function entry, some operating systems violate this specification by not clearing the DF flag in their exception dispatchers. 
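For the run-time CPU detection case mentioned above, the usual pattern is to compile each variant in its own file with the appropriate -march or -m options and to select between them at run time; GCC's __builtin_cpu_init and __builtin_cpu_supports can drive the dispatch. The sketch below assumes two hypothetical kernel functions living in separately compiled files (one built with -mavx2, one with -msse2); only the dispatcher is compiled without those options.

        extern void avx2_kernel(float *dst, const float *src, int n);
        extern void sse2_kernel(float *dst, const float *src, int n);

        void run_kernel(float *dst, const float *src, int n)
        {
            __builtin_cpu_init();
            if (__builtin_cpu_supports("avx2"))
                avx2_kernel(dst, src, n);   /* file built with -mavx2 */
            else
                sse2_kernel(dst, src, n);   /* file built with -msse2 */
        }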
The exception handler can be invoked with the DF flag set, which leads to wrong direction mode when string instructions are used. This option can be enabled by default on 32-bit x86 targets by configuring GCC with the --enable-cld configure option. Generation of "cld" instructions can be suppressed with the -mno-cld compiler option in this case. -mvzeroupper This option instructs GCC to emit a "vzeroupper" instruction before a transfer of control flow out of the function to minimize the AVX to SSE transition penalty as well as remove unnecessary "zeroupper" intrinsics. -mprefer-avx128 This option instructs GCC to use 128-bit AVX instructions instead of 256-bit AVX instructions in the auto-vectorizer. -mprefer-vector-width=opt This option instructs GCC to use opt-bit vector width in instructions instead of default on the selected platform. none No extra limitations applied to GCC other than defined by the selected platform. 128 Prefer 128-bit vector width for instructions. 256 Prefer 256-bit vector width for instructions. 512 Prefer 512-bit vector width for instructions. -mcx16 This option enables GCC to generate "CMPXCHG16B" instructions in 64-bit code to implement compare-and-exchange operations on 16-byte aligned 128-bit objects. This is useful for atomic updates of data structures exceeding one machine word in size. The compiler uses this instruction to implement __sync Builtins. However, for __atomic Builtins operating on 128-bit integers, a library call is always used. -msahf This option enables generation of "SAHF" instructions in 64-bit code. Early Intel Pentium 4 CPUs with Intel 64 support, prior to the introduction of Pentium 4 G1 step in December 2005, lacked the "LAHF" and "SAHF" instructions which are supported by AMD64. These are load and store instructions, respectively, for certain status flags. In 64-bit mode, the "SAHF" instruction is used to optimize "fmod", "drem", and "remainder" built-in functions; see Other Builtins for details. -mmovbe This option enables use of the "movbe" instruction to implement "__builtin_bswap32" and "__builtin_bswap64". -mshstk The -mshstk option enables shadow stack built-in functions from x86 Control-flow Enforcement Technology (CET). -mcrc32 This option enables built-in functions "__builtin_ia32_crc32qi", "__builtin_ia32_crc32hi", "__builtin_ia32_crc32si" and "__builtin_ia32_crc32di" to generate the "crc32" machine instruction. -mrecip This option enables use of "RCPSS" and "RSQRTSS" instructions (and their vectorized variants "RCPPS" and "RSQRTPS") with an additional Newton-Raphson step to increase precision instead of "DIVSS" and "SQRTSS" (and their vectorized variants) for single-precision floating-point arguments. These instructions are generated only when -funsafe-math-optimizations is enabled together with -ffinite-math-only and -fno-trapping-math. Note that while the throughput of the sequence is higher than the throughput of the non-reciprocal instruction, the precision of the sequence can be decreased by up to 2 ulp (i.e. the inverse of 1.0 equals 0.99999994). Note that GCC implements "1.0f/sqrtf(x)" in terms of "RSQRTSS" (or "RSQRTPS") already with -ffast-math (or the above option combination), and doesn't need -mrecip. Also note that GCC emits the above sequence with additional Newton-Raphson step for vectorized single-float division and vectorized "sqrtf(x)" already with -ffast-math (or the above option combination), and doesn't need -mrecip. -mrecip=opt This option controls which reciprocal estimate instructions may be used. 
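As an illustration of the -mcx16 paragraph above, a 16-byte compare-and-swap written with the __sync builtins can be expanded inline to "CMPXCHG16B" when -mcx16 is in effect. This fragment is a sketch, not taken from the manual; cas128 is an invented name.

        /* Requires a 64-bit target and -mcx16; *p must be 16-byte
           aligned, which __int128 objects are by default.            */
        __int128 cas128(__int128 *p, __int128 expected, __int128 desired)
        {
            return __sync_val_compare_and_swap(p, expected, desired);
        }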
opt is a comma-separated list of options, which may be preceded by a ! to invert the option: all Enable all estimate instructions. default Enable the default instructions, equivalent to -mrecip. none Disable all estimate instructions, equivalent to -mno-recip. div Enable the approximation for scalar division. vec-div Enable the approximation for vectorized division. sqrt Enable the approximation for scalar square root. vec-sqrt Enable the approximation for vectorized square root. So, for example, -mrecip=all,!sqrt enables all of the reciprocal approximations, except for square root. -mveclibabi=type Specifies the ABI type to use for vectorizing intrinsics using an external library. Supported values for type are svml for the Intel short vector math library and acml for the AMD math core library. To use this option, both -ftree-vectorize and -funsafe-math-optimizations have to be enabled, and an SVML or ACML ABI-compatible library must be specified at link time. GCC currently emits calls to "vmldExp2", "vmldLn2", "vmldLog102", "vmldPow2", "vmldTanh2", "vmldTan2", "vmldAtan2", "vmldAtanh2", "vmldCbrt2", "vmldSinh2", "vmldSin2", "vmldAsinh2", "vmldAsin2", "vmldCosh2", "vmldCos2", "vmldAcosh2", "vmldAcos2", "vmlsExp4", "vmlsLn4", "vmlsLog104", "vmlsPow4", "vmlsTanh4", "vmlsTan4", "vmlsAtan4", "vmlsAtanh4", "vmlsCbrt4", "vmlsSinh4", "vmlsSin4", "vmlsAsinh4", "vmlsAsin4", "vmlsCosh4", "vmlsCos4", "vmlsAcosh4" and "vmlsAcos4" for corresponding function type when -mveclibabi=svml is used, and "__vrd2_sin", "__vrd2_cos", "__vrd2_exp", "__vrd2_log", "__vrd2_log2", "__vrd2_log10", "__vrs4_sinf", "__vrs4_cosf", "__vrs4_expf", "__vrs4_logf", "__vrs4_log2f", "__vrs4_log10f" and "__vrs4_powf" for the corresponding function type when -mveclibabi=acml is used. -mabi=name Generate code for the specified calling convention. Permissible values are sysv for the ABI used on GNU/Linux and other systems, and ms for the Microsoft ABI. The default is to use the Microsoft ABI when targeting Microsoft Windows and the SysV ABI on all other systems. You can control this behavior for specific functions by using the function attributes "ms_abi" and "sysv_abi". -mforce-indirect-call Force all calls to functions to be indirect. This is useful when using Intel Processor Trace where it generates more precise timing information for function calls. -mmanual-endbr Insert ENDBR instruction at function entry only via the "cf_check" function attribute. This is useful when used with the option -fcf-protection=branch to control ENDBR insertion at the function entry. -mcall-ms2sysv-xlogues Due to differences in 64-bit ABIs, any Microsoft ABI function that calls a System V ABI function must consider RSI, RDI and XMM6-15 as clobbered. By default, the code for saving and restoring these registers is emitted inline, resulting in fairly lengthy prologues and epilogues. Using -mcall-ms2sysv-xlogues emits prologues and epilogues that use stubs in the static portion of libgcc to perform these saves and restores, thus reducing function size at the cost of a few extra instructions. -mtls-dialect=type Generate code to access thread-local storage using the gnu or gnu2 conventions. gnu is the conservative default; gnu2 is more efficient, but it may add compile- and run-time requirements that cannot be satisfied on all systems. -mpush-args -mno-push-args Use PUSH operations to store outgoing parameters. This method is shorter and usually equally fast as method using SUB/MOV operations and is enabled by default. 
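The "ms_abi" and "sysv_abi" function attributes named under -mabi can also be applied per function when only a few interfaces cross the ABI boundary. An illustrative sketch follows; the function names are invented.

        /* Called by Microsoft-ABI code while the rest of the program
           uses the SysV calling convention.                           */
        int __attribute__((ms_abi)) from_ms_side(int a, int b)
        {
            return a + b;
        }

        /* Explicitly SysV, e.g. inside a unit compiled with -mabi=ms. */
        long __attribute__((sysv_abi)) from_sysv_side(long x)
        {
            return x * 2;
        }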
In some cases disabling it may improve performance because of improved scheduling and reduced dependencies. -maccumulate-outgoing-args If enabled, the maximum amount of space required for outgoing arguments is computed in the function prologue. This is faster on most modern CPUs because of reduced dependencies, improved scheduling and reduced stack usage when the preferred stack boundary is not equal to 2. The drawback is a notable increase in code size. This switch implies -mno-push-args. -mthreads Support thread-safe exception handling on MinGW. Programs that rely on thread-safe exception handling must compile and link all code with the -mthreads option. When compiling, -mthreads defines -D_MT; when linking, it links in a special thread helper library -lmingwthrd which cleans up per-thread exception-handling data. -mms-bitfields -mno-ms-bitfields Enable/disable bit-field layout compatible with the native Microsoft Windows compiler. If "packed" is used on a structure, or if bit-fields are used, it may be that the Microsoft ABI lays out the structure differently than the way GCC normally does. Particularly when moving packed data between functions compiled with GCC and the native Microsoft compiler (either via function call or as data in a file), it may be necessary to access either format. This option is enabled by default for Microsoft Windows targets. This behavior can also be controlled locally by use of variable or type attributes. For more information, see x86 Variable Attributes and x86 Type Attributes. The Microsoft structure layout algorithm is fairly simple with the exception of the bit-field packing. The padding and alignment of members of structures and whether a bit-field can straddle a storage-unit boundary are determine by these rules: 1. Structure members are stored sequentially in the order in which they are declared: the first member has the lowest memory address and the last member the highest. 2. Every data object has an alignment requirement. The alignment requirement for all data except structures, unions, and arrays is either the size of the object or the current packing size (specified with either the "aligned" attribute or the "pack" pragma), whichever is less. For structures, unions, and arrays, the alignment requirement is the largest alignment requirement of its members. Every object is allocated an offset so that: offset % alignment_requirement == 0 3. Adjacent bit-fields are packed into the same 1-, 2-, or 4-byte allocation unit if the integral types are the same size and if the next bit-field fits into the current allocation unit without crossing the boundary imposed by the common alignment requirements of the bit-fields. MSVC interprets zero-length bit-fields in the following ways: 1. If a zero-length bit-field is inserted between two bit- fields that are normally coalesced, the bit-fields are not coalesced. For example: struct { unsigned long bf_1 : 12; unsigned long : 0; unsigned long bf_2 : 12; } t1; The size of "t1" is 8 bytes with the zero-length bit- field. If the zero-length bit-field were removed, "t1"'s size would be 4 bytes. 2. If a zero-length bit-field is inserted after a bit-field, "foo", and the alignment of the zero-length bit-field is greater than the member that follows it, "bar", "bar" is aligned as the type of the zero-length bit-field. For example: struct { char foo : 4; short : 0; char bar; } t2; struct { char foo : 4; short : 0; double bar; } t3; For "t2", "bar" is placed at offset 2, rather than offset 1. 
Accordingly, the size of "t2" is 4. For "t3", the zero-length bit-field does not affect the alignment of "bar" or, as a result, the size of the structure. Taking this into account, it is important to note the following: 1. If a zero-length bit-field follows a normal bit-field, the type of the zero-length bit-field may affect the alignment of the structure as whole. For example, "t2" has a size of 4 bytes, since the zero-length bit-field follows a normal bit-field, and is of type short. 2. Even if a zero-length bit-field is not followed by a normal bit-field, it may still affect the alignment of the structure: struct { char foo : 6; long : 0; } t4; Here, "t4" takes up 4 bytes. 3. Zero-length bit-fields following non-bit-field members are ignored: struct { char foo; long : 0; char bar; } t5; Here, "t5" takes up 2 bytes. -mno-align-stringops Do not align the destination of inlined string operations. This switch reduces code size and improves performance in case the destination is already aligned, but GCC doesn't know about it. -minline-all-stringops By default GCC inlines string operations only when the destination is known to be aligned to least a 4-byte boundary. This enables more inlining and increases code size, but may improve performance of code that depends on fast "memcpy", "strlen", and "memset" for short lengths. -minline-stringops-dynamically For string operations of unknown size, use run-time checks with inline code for small blocks and a library call for large blocks. -mstringop-strategy=alg Override the internal decision heuristic for the particular algorithm to use for inlining string operations. The allowed values for alg are: rep_byte rep_4byte rep_8byte Expand using i386 "rep" prefix of the specified size. byte_loop loop unrolled_loop Expand into an inline loop. libcall Always use a library call. -mmemcpy-strategy=strategy Override the internal decision heuristic to decide if "__builtin_memcpy" should be inlined and what inline algorithm to use when the expected size of the copy operation is known. strategy is a comma-separated list of alg:max_size:dest_align triplets. alg is specified in -mstringop-strategy, max_size specifies the max byte size with which inline algorithm alg is allowed. For the last triplet, the max_size must be "-1". The max_size of the triplets in the list must be specified in increasing order. The minimal byte size for alg is 0 for the first triplet and "max_size + 1" of the preceding range. -mmemset-strategy=strategy The option is similar to -mmemcpy-strategy= except that it is to control "__builtin_memset" expansion. -momit-leaf-frame-pointer Don't keep the frame pointer in a register for leaf functions. This avoids the instructions to save, set up, and restore frame pointers and makes an extra register available in leaf functions. The option -fomit-leaf-frame-pointer removes the frame pointer for leaf functions, which might make debugging harder. -mtls-direct-seg-refs -mno-tls-direct-seg-refs Controls whether TLS variables may be accessed with offsets from the TLS segment register (%gs for 32-bit, %fs for 64-bit), or whether the thread base pointer must be added. Whether or not this is valid depends on the operating system, and whether it maps the segment to cover the entire TLS area. For systems that use the GNU C Library, the default is on. -msse2avx -mno-sse2avx Specify that the assembler should encode SSE instructions with VEX prefix. The option -mavx turns this on by default. 
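As noted above, the Microsoft bit-field layout can also be requested per type rather than per translation unit, using the x86 type attributes. A hedged sketch, assuming an x86 target where the "ms_struct" and "gcc_struct" attributes are accepted; the type names are invented.

        struct __attribute__((ms_struct)) wire_header {
            unsigned long tag : 12;
            unsigned long     : 0;   /* zero-length bit-field starts a
                                        new allocation unit under the
                                        Microsoft rules above          */
            unsigned long len : 12;
        };

        struct __attribute__((gcc_struct)) native_header {
            unsigned long tag : 12;
            unsigned long     : 0;
            unsigned long len : 12;
        };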
-mfentry -mno-fentry If profiling is active (-pg), put the profiling counter call before the prologue. Note: On x86 architectures the attribute "ms_hook_prologue" isn't possible at the moment for -mfentry and -pg.
-mrecord-mcount -mno-record-mcount If profiling is active (-pg), generate a __mcount_loc section that contains pointers to each profiling call. This is useful for automatically patching calls in and out.
-mnop-mcount -mno-nop-mcount If profiling is active (-pg), generate the calls to the profiling functions as NOPs. This is useful when they should be patched in later dynamically. This is likely only useful together with -mrecord-mcount.
-minstrument-return=type Instrument function exit in -pg -mfentry instrumented functions with a call to the specified function. This only instruments true returns ending with ret, but not sibling calls ending with jump. Valid types are none to not instrument, call to generate a call to __return__, or nop5 to generate a 5-byte nop.
-mrecord-return -mno-record-return Generate a __return_loc section pointing to all return instrumentation code.
-mfentry-name=name Set the name of the __fentry__ symbol called at function entry for -pg -mfentry functions.
-mfentry-section=name Set the name of the section to record -mrecord-mcount calls (default __mcount_loc).
-mskip-rax-setup -mno-skip-rax-setup When generating code for the x86-64 architecture with SSE extensions disabled, -mskip-rax-setup can be used to skip setting up the RAX register when there are no variable arguments passed in vector registers. Warning: Since the RAX register is used to avoid unnecessarily saving vector registers on the stack when passing variable arguments, the impacts of this option are that callees may waste some stack space, misbehave or jump to a random location. GCC 4.4 or newer don't have those issues, regardless of the RAX register value.
-m8bit-idiv -mno-8bit-idiv On some processors, like Intel Atom, 8-bit unsigned integer divide is much faster than 32-bit/64-bit integer divide. This option generates a run-time check. If both dividend and divisor are within the range of 0 to 255, 8-bit unsigned integer divide is used instead of 32-bit/64-bit integer divide.
-mavx256-split-unaligned-load -mavx256-split-unaligned-store Split 32-byte AVX unaligned load and store.
-mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset Generate stack protection code using a canary at guard. Supported locations are global for a global canary or tls for a per-thread canary in the TLS block (the default). This option has effect only when -fstack-protector or -fstack-protector-all is specified. With the latter choice the options -mstack-protector-guard-reg=reg and -mstack-protector-guard-offset=offset furthermore specify which segment register (%fs or %gs) to use as the base register for reading the canary, and from what offset from that base register. The default for those is as specified in the relevant ABI.
-mgeneral-regs-only Generate code that uses only the general-purpose registers. This prevents the compiler from using floating-point, vector, mask and bound registers.
-mindirect-branch=choice Convert indirect call and jump with choice. The default is keep, which keeps indirect call and jump unmodified. thunk converts indirect call and jump to call and return thunk. thunk-inline converts indirect call and jump to inlined call and return thunk. thunk-extern converts indirect call and jump to external call and return thunk provided in a separate object file.
You can control this behavior for a specific function by using the function attribute "indirect_branch". Note that -mcmodel=large is incompatible with -mindirect-branch=thunk and -mindirect-branch=thunk-extern since the thunk function may not be reachable in the large code model. Note that -mindirect-branch=thunk-extern is compatible with -fcf-protection=branch since the external thunk can be made to enable control-flow check. -mfunction-return=choice Convert function return with choice. The default is keep, which keeps function return unmodified. thunk converts function return to call and return thunk. thunk-inline converts function return to inlined call and return thunk. thunk-extern converts function return to external call and return thunk provided in a separate object file. You can control this behavior for a specific function by using the function attribute "function_return". Note that -mindirect-return=thunk-extern is compatible with -fcf-protection=branch since the external thunk can be made to enable control-flow check. Note that -mcmodel=large is incompatible with -mfunction-return=thunk and -mfunction-return=thunk-extern since the thunk function may not be reachable in the large code model. -mindirect-branch-register Force indirect call and jump via register. These -m switches are supported in addition to the above on x86-64 processors in 64-bit environments. -m32 -m64 -mx32 -m16 -miamcu Generate code for a 16-bit, 32-bit or 64-bit environment. The -m32 option sets "int", "long", and pointer types to 32 bits, and generates code that runs on any i386 system. The -m64 option sets "int" to 32 bits and "long" and pointer types to 64 bits, and generates code for the x86-64 architecture. For Darwin only the -m64 option also turns off the -fno-pic and -mdynamic-no-pic options. The -mx32 option sets "int", "long", and pointer types to 32 bits, and generates code for the x86-64 architecture. The -m16 option is the same as -m32, except for that it outputs the ".code16gcc" assembly directive at the beginning of the assembly output so that the binary can run in 16-bit mode. The -miamcu option generates code which conforms to Intel MCU psABI. It requires the -m32 option to be turned on. -mno-red-zone Do not use a so-called "red zone" for x86-64 code. The red zone is mandated by the x86-64 ABI; it is a 128-byte area beyond the location of the stack pointer that is not modified by signal or interrupt handlers and therefore can be used for temporary data without adjusting the stack pointer. The flag -mno-red-zone disables this red zone. -mcmodel=small Generate code for the small code model: the program and its symbols must be linked in the lower 2 GB of the address space. Pointers are 64 bits. Programs can be statically or dynamically linked. This is the default code model. -mcmodel=kernel Generate code for the kernel code model. The kernel runs in the negative 2 GB of the address space. This model has to be used for Linux kernel code. -mcmodel=medium Generate code for the medium model: the program is linked in the lower 2 GB of the address space. Small symbols are also placed there. Symbols with sizes larger than -mlarge-data-threshold are put into large data or BSS sections and can be located above 2GB. Programs can be statically or dynamically linked. -mcmodel=large Generate code for the large model. This model makes no assumptions about addresses and sizes of sections. -maddress-mode=long Generate code for long address mode. This is only supported for 64-bit and x32 environments. 
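A sketch of the per-function escape hatch for the conversions above, on targets where -mindirect-branch and -mfunction-return are supported: marking a performance-critical function with the "indirect_branch" and "function_return" attributes keeps its branches unconverted even when the rest of the program is built with -mindirect-branch=thunk and -mfunction-return=thunk. The function name is invented for the example.

        void __attribute__((indirect_branch("keep"), function_return("keep")))
        hot_inner_loop(void (*step)(void))
        {
            step();   /* stays a plain indirect call in this function */
        }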
It is the default address mode for 64-bit environments. -maddress-mode=short Generate code for short address mode. This is only supported for 32-bit and x32 environments. It is the default address mode for 32-bit and x32 environments. x86 Windows Options These additional options are available for Microsoft Windows targets: -mconsole This option specifies that a console application is to be generated, by instructing the linker to set the PE header subsystem type required for console applications. This option is available for Cygwin and MinGW targets and is enabled by default on those targets. -mdll This option is available for Cygwin and MinGW targets. It specifies that a DLL---a dynamic link library---is to be generated, enabling the selection of the required runtime startup object and entry point. -mnop-fun-dllimport This option is available for Cygwin and MinGW targets. It specifies that the "dllimport" attribute should be ignored. -mthread This option is available for MinGW targets. It specifies that MinGW-specific thread support is to be used. -municode This option is available for MinGW-w64 targets. It causes the "UNICODE" preprocessor macro to be predefined, and chooses Unicode-capable runtime startup code. -mwin32 This option is available for Cygwin and MinGW targets. It specifies that the typical Microsoft Windows predefined macros are to be set in the pre-processor, but does not influence the choice of runtime library/startup code. -mwindows This option is available for Cygwin and MinGW targets. It specifies that a GUI application is to be generated by instructing the linker to set the PE header subsystem type appropriately. -fno-set-stack-executable This option is available for MinGW targets. It specifies that the executable flag for the stack used by nested functions isn't set. This is necessary for binaries running in kernel mode of Microsoft Windows, as there the User32 API, which is used to set executable privileges, isn't available. -fwritable-relocated-rdata This option is available for MinGW and Cygwin targets. It specifies that relocated-data in read-only section is put into the ".data" section. This is a necessary for older runtimes not supporting modification of ".rdata" sections for pseudo-relocation. -mpe-aligned-commons This option is available for Cygwin and MinGW targets. It specifies that the GNU extension to the PE file format that permits the correct alignment of COMMON variables should be used when generating code. It is enabled by default if GCC detects that the target assembler found during configuration supports the feature. See also under x86 Options for standard options. Xstormy16 Options These options are defined for Xstormy16: -msim Choose startup files and linker script suitable for the simulator. Xtensa Options These options are supported for Xtensa targets: -mconst16 -mno-const16 Enable or disable use of "CONST16" instructions for loading constant values. The "CONST16" instruction is currently not a standard option from Tensilica. When enabled, "CONST16" instructions are always used in place of the standard "L32R" instructions. The use of "CONST16" is enabled by default only if the "L32R" instruction is not available. -mfused-madd -mno-fused-madd Enable or disable use of fused multiply/add and multiply/subtract instructions in the floating-point option. This has no effect if the floating-point option is not also enabled. 
Disabling fused multiply/add and multiply/subtract instructions forces the compiler to use separate instructions for the multiply and add/subtract operations. This may be desirable in some cases where strict IEEE 754-compliant results are required: the fused multiply add/subtract instructions do not round the intermediate result, thereby producing results with more bits of precision than specified by the IEEE standard. Disabling fused multiply add/subtract instructions also ensures that the program output is not sensitive to the compiler's ability to combine multiply and add/subtract operations. -mserialize-volatile -mno-serialize-volatile When this option is enabled, GCC inserts "MEMW" instructions before "volatile" memory references to guarantee sequential consistency. The default is -mserialize-volatile. Use -mno-serialize-volatile to omit the "MEMW" instructions. -mforce-no-pic For targets, like GNU/Linux, where all user-mode Xtensa code must be position-independent code (PIC), this option disables PIC for compiling kernel code. -mtext-section-literals -mno-text-section-literals These options control the treatment of literal pools. The default is -mno-text-section-literals, which places literals in a separate section in the output file. This allows the literal pool to be placed in a data RAM/ROM, and it also allows the linker to combine literal pools from separate object files to remove redundant literals and improve code size. With -mtext-section-literals, the literals are interspersed in the text section in order to keep them as close as possible to their references. This may be necessary for large assembly files. Literals for each function are placed right before that function. -mauto-litpools -mno-auto-litpools These options control the treatment of literal pools. The default is -mno-auto-litpools, which places literals in a separate section in the output file unless -mtext-section-literals is used. With -mauto-litpools the literals are interspersed in the text section by the assembler. The compiler does not produce explicit ".literal" directives and loads literals into registers with "MOVI" instructions instead of "L32R" to let the assembler do relaxation and place literals as necessary. This option allows the assembler to create several literal pools per function and assemble very big functions, which may not be possible with -mtext-section-literals. -mtarget-align -mno-target-align When this option is enabled, GCC instructs the assembler to automatically align instructions to reduce branch penalties at the expense of some code density. The assembler attempts to widen density instructions to align branch targets and the instructions following call instructions. If there are not enough preceding safe density instructions to align a target, no widening is performed. The default is -mtarget-align. These options do not affect the treatment of auto-aligned instructions like "LOOP", which the assembler always aligns, either by widening density instructions or by inserting NOP instructions. -mlongcalls -mno-longcalls When this option is enabled, GCC instructs the assembler to translate direct calls to indirect calls unless it can determine that the target of a direct call is in the range allowed by the call instruction. This translation typically occurs for calls to functions in other source files. Specifically, the assembler translates a direct "CALL" instruction into an "L32R" followed by a "CALLX" instruction. The default is -mno-longcalls.
This option should be used in programs where the call target can potentially be out of range. This option is implemented in the assembler, not the compiler, so the assembly code generated by GCC still shows direct call instructions---look at the disassembled object code to see the actual instructions. Note that the assembler uses an indirect call for every cross-file call, not just those that really are out of range. zSeries Options These are listed under S/390 and zSeries Options. ENVIRONMENT top This section describes several environment variables that affect how GCC operates. Some of them work by specifying directories or prefixes to use when searching for various kinds of files. Some are used to specify other aspects of the compilation environment. Note that you can also specify places to search using options such as -B, -I and -L. These take precedence over places specified using environment variables, which in turn take precedence over those specified by the configuration of GCC. LANG LC_CTYPE LC_MESSAGES LC_ALL These environment variables control the way that GCC uses localization information which allows GCC to work with different national conventions. GCC inspects the locale categories LC_CTYPE and LC_MESSAGES if it has been configured to do so. These locale categories can be set to any value supported by your installation. A typical value is en_GB.UTF-8 for English in the United Kingdom encoded in UTF-8. The LC_CTYPE environment variable specifies character classification. GCC uses it to determine the character boundaries in a string; this is needed for some multibyte encodings that contain quote and escape characters that are otherwise interpreted as a string end or escape. The LC_MESSAGES environment variable specifies the language to use in diagnostic messages. If the LC_ALL environment variable is set, it overrides the value of LC_CTYPE and LC_MESSAGES; otherwise, LC_CTYPE and LC_MESSAGES default to the value of the LANG environment variable. If none of these variables are set, GCC defaults to traditional C English behavior. TMPDIR If TMPDIR is set, it specifies the directory to use for temporary files. GCC uses temporary files to hold the output of one stage of compilation which is to be used as input to the next stage: for example, the output of the preprocessor, which is the input to the compiler proper. GCC_COMPARE_DEBUG Setting GCC_COMPARE_DEBUG is nearly equivalent to passing -fcompare-debug to the compiler driver. See the documentation of this option for more details. GCC_EXEC_PREFIX If GCC_EXEC_PREFIX is set, it specifies a prefix to use in the names of the subprograms executed by the compiler. No slash is added when this prefix is combined with the name of a subprogram, but you can specify a prefix that ends with a slash if you wish. If GCC_EXEC_PREFIX is not set, GCC attempts to figure out an appropriate prefix to use based on the pathname it is invoked with. If GCC cannot find the subprogram using the specified prefix, it tries looking in the usual places for the subprogram. The default value of GCC_EXEC_PREFIX is prefix/lib/gcc/ where prefix is the prefix to the installed compiler. In many cases prefix is the value of "prefix" when you ran the configure script. Other prefixes specified with -B take precedence over this prefix. This prefix is also used for finding files such as crt0.o that are used for linking. In addition, the prefix is used in an unusual way in finding the directories to search for header files.
For each of the standard directories whose name normally begins with /usr/local/lib/gcc (more precisely, with the value of GCC_INCLUDE_DIR), GCC tries replacing that beginning with the specified prefix to produce an alternate directory name. Thus, with -Bfoo/, GCC searches foo/bar just before it searches the standard directory /usr/local/lib/bar. If a standard directory begins with the configured prefix then the value of prefix is replaced by GCC_EXEC_PREFIX when looking for header files. COMPILER_PATH The value of COMPILER_PATH is a colon-separated list of directories, much like PATH. GCC tries the directories thus specified when searching for subprograms, if it cannot find the subprograms using GCC_EXEC_PREFIX. LIBRARY_PATH The value of LIBRARY_PATH is a colon-separated list of directories, much like PATH. When configured as a native compiler, GCC tries the directories thus specified when searching for special linker files, if it cannot find them using GCC_EXEC_PREFIX. Linking using GCC also uses these directories when searching for ordinary libraries for the -l option (but directories specified with -L come first). LANG This variable is used to pass locale information to the compiler. One way in which this information is used is to determine the character set to be used when character literals, string literals and comments are parsed in C and C++. When the compiler is configured to allow multibyte characters, the following values for LANG are recognized: C-JIS Recognize JIS characters. C-SJIS Recognize SJIS characters. C-EUCJP Recognize EUCJP characters. If LANG is not defined, or if it has some other value, then the compiler uses "mblen" and "mbtowc" as defined by the default locale to recognize and translate multibyte characters. Some additional environment variables affect the behavior of the preprocessor. CPATH C_INCLUDE_PATH CPLUS_INCLUDE_PATH OBJC_INCLUDE_PATH Each variable's value is a list of directories separated by a special character, much like PATH, in which to look for header files. The special character, "PATH_SEPARATOR", is target-dependent and determined at GCC build time. For Microsoft Windows-based targets it is a semicolon, and for almost all other targets it is a colon. CPATH specifies a list of directories to be searched as if specified with -I, but after any paths given with -I options on the command line. This environment variable is used regardless of which language is being preprocessed. The remaining environment variables apply only when preprocessing the particular language indicated. Each specifies a list of directories to be searched as if specified with -isystem, but after any paths given with -isystem options on the command line. In all these variables, an empty element instructs the compiler to search its current working directory. Empty elements can appear at the beginning or end of a path. For instance, if the value of CPATH is ":/special/include", that has the same effect as -I. -I/special/include.
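A brief sketch of how the search-path variables above might be used (the /opt/mylib paths and the library name mylib are hypothetical examples, not defaults shipped with GCC):

       export CPATH=/opt/mylib/include      # headers found as if -I/opt/mylib/include had been given
       export LIBRARY_PATH=/opt/mylib/lib   # searched for -l libraries, after any -L directories
       gcc main.c -lmylib -o main

Because these variables are consulted after the corresponding command-line options, explicit -I and -L directories still take precedence over them.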
DEPENDENCIES_OUTPUT If this variable is set, its value specifies how to output dependencies for Make based on the non-system header files processed by the compiler. System header files are ignored in the dependency output. The value of DEPENDENCIES_OUTPUT can be just a file name, in which case the Make rules are written to that file, guessing the target name from the source file name. Or the value can have the form file target, in which case the rules are written to file file using target as the target name. In other words, this environment variable is equivalent to combining the options -MM and -MF, with an optional -MT switch too. SUNPRO_DEPENDENCIES This variable is the same as DEPENDENCIES_OUTPUT (see above), except that system header files are not ignored, so it implies -M rather than -MM. However, the dependence on the main input file is omitted. SOURCE_DATE_EPOCH If this variable is set, its value specifies a UNIX timestamp to be used in replacement of the current date and time in the "__DATE__" and "__TIME__" macros, so that the embedded timestamps become reproducible. The value of SOURCE_DATE_EPOCH must be a UNIX timestamp, defined as the number of seconds (excluding leap seconds) since 01 Jan 1970 00:00:00 represented in ASCII; identical to the output of "date +%s" on GNU/Linux and other systems that support the %s extension in the "date" command. The value should be a known timestamp such as the last modification time of the source or package and it should be set by the build process. BUGS top For instructions on reporting bugs, see <https://gcc.gnu.org/bugs/>. FOOTNOTES top 1. On some systems, gcc -shared needs to build supplementary stub code for constructors to work. On multi-libbed systems, gcc -shared must select the correct support libraries to link against. Failing to supply the correct flags may lead to subtle defects. Supplying them in cases where they are not necessary is innocuous. SEE ALSO top gpl(7), gfdl(7), fsf-funding(7), cpp(1), gcov(1), as(1), ld(1), gdb(1), dbx(1) and the Info entries for gcc, cpp, as, ld, binutils and gdb. AUTHOR top See the Info entry for gcc, or <http://gcc.gnu.org/onlinedocs/gcc/Contributors.html>, for contributors to GCC. COPYRIGHT top Copyright (c) 1988-2019 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with the Invariant Sections being "GNU General Public License" and "Funding Free Software", the Front-Cover texts being (a) (see below), and with the Back-Cover Texts being (b) (see below). A copy of the license is included in the gfdl(7) man page. (a) The FSF's Front-Cover Text is: A GNU Manual (b) The FSF's Back-Cover Text is: You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development. COLOPHON top This page is part of the gcc (GNU Compiler Collection) project. Information about the project can be found at http://gcc.gnu.org/. If you have a bug report for this manual page, see http://gcc.gnu.org/bugs/. This page was obtained from the tarball gcc-9.5.0.tar.xz fetched from ftp://ftp.gwdg.de/pub/misc/gcc/releases/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org gcc-9.5.0 2022-05-27 GCC(1)
# g++

> Compiles C++ source files.
> Part of GCC (GNU Compiler Collection).
> More information: <https://gcc.gnu.org>.

- Compile a source code file into an executable binary:

`g++ {{path/to/source.cpp}} -o {{path/to/output_executable}}`

- Display common warnings:

`g++ {{path/to/source.cpp}} -Wall -o {{path/to/output_executable}}`

- Choose a language standard to compile for (C++98/C++11/C++14/C++17):

`g++ {{path/to/source.cpp}} -std={{c++98|c++11|c++14|c++17}} -o {{path/to/output_executable}}`

- Include libraries located at a different path than the source file:

`g++ {{path/to/source.cpp}} -o {{path/to/output_executable}} -I{{path/to/header}} -L{{path/to/library}} -l{{library_name}}`

- Compile and link multiple source code files into an executable binary:

`g++ -c {{path/to/source1.cpp path/to/source2.cpp ...}} && g++ -o {{path/to/output_executable}} {{path/to/source1.o path/to/source2.o ...}}`

- Optimize the compiled program for performance:

`g++ {{path/to/source.cpp}} -O{{1|2|3|fast}} -o {{path/to/output_executable}}`

- Display version:

`g++ --version`
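The following extra entry is an illustrative sketch that simply combines several of the options shown above (all paths are placeholders):

- Compile with a specific language standard, common warnings, and optimization in a single invocation:

`g++ {{path/to/source.cpp}} -std=c++17 -Wall -O2 -o {{path/to/output_executable}}`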
gcc
gcc(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training gcc(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT | BUGS | FOOTNOTES | SEE ALSO | AUTHOR | COPYRIGHT | COLOPHON GCC(1) GNU GCC(1) NAME top gcc - GNU project C and C++ compiler SYNOPSIS top gcc [-c|-S|-E] [-std=standard] [-g] [-pg] [-Olevel] [-Wwarn...] [-Wpedantic] [-Idir...] [-Ldir...] [-Dmacro[=defn]...] [-Umacro] [-foption...] [-mmachine-option...] [-o outfile] [@file] infile... Only the most useful options are listed here; see below for the remainder. g++ accepts mostly the same options as gcc. DESCRIPTION top When you invoke GCC, it normally does preprocessing, compilation, assembly and linking. The "overall options" allow you to stop this process at an intermediate stage. For example, the -c option says not to run the linker. Then the output consists of object files output by the assembler. Other options are passed on to one or more stages of processing. Some options control the preprocessor and others the compiler itself. Yet other options control the assembler and linker; most of these are not documented here, since you rarely need to use any of them. Most of the command-line options that you can use with GCC are useful for C programs; when an option is only useful with another language (usually C++), the explanation says so explicitly. If the description for a particular option does not mention a source language, you can use that option with all supported languages. The usual way to run GCC is to run the executable called gcc, or machine-gcc when cross-compiling, or machine-gcc-version to run a specific version of GCC. When you compile C++ programs, you should invoke GCC as g++ instead. The gcc program accepts options and file names as operands. Many options have multi-letter names; therefore multiple single-letter options may not be grouped: -dv is very different from -d -v. You can mix options and other arguments. For the most part, the order you use doesn't matter. Order does matter when you use several options of the same kind; for example, if you specify -L more than once, the directories are searched in the order specified. Also, the placement of the -l option is significant. Many options have long names starting with -f or with -W---for example, -fmove-loop-invariants, -Wformat and so on. Most of these have both positive and negative forms; the negative form of -ffoo is -fno-foo. This manual documents only one of these two forms, whichever one is not the default. Some options take one or more arguments typically separated either by a space or by the equals sign (=) from the option name. Unless documented otherwise, an argument can be either numeric or a string. Numeric arguments must typically be small unsigned decimal or hexadecimal integers. Hexadecimal arguments must begin with the 0x prefix. Arguments to options that specify a size threshold of some sort may be arbitrarily large decimal or hexadecimal integers followed by a byte size suffix designating a multiple of bytes such as "kB" and "KiB" for kilobyte and kibibyte, respectively, "MB" and "MiB" for megabyte and mebibyte, "GB" and "GiB" for gigabyte and gibibyte, and so on. Refer to the NIST, IEC, and other relevant national and international standards for the full listing and explanation of the binary and decimal byte size prefixes. OPTIONS top Option Summary Here is a summary of all the options, grouped by type.
Explanations are in the following sections. Overall Options -c -S -E -o file -x language -v -### --help[=class[,...]] --target-help --version -pass-exit-codes -pipe -specs=file -wrapper @file -ffile-prefix-map=old=new -fplugin=file -fplugin-arg-name=arg -fdump-ada-spec[-slim] -fada-spec-parent=unit -fdump-go-spec=file C Language Options -ansi -std=standard -fgnu89-inline -fpermitted-flt-eval-methods=standard -aux-info filename -fallow-parameterless-variadic-functions -fno-asm -fno-builtin -fno-builtin-function -fgimple -fhosted -ffreestanding -fopenacc -fopenacc-dim=geom -fopenmp -fopenmp-simd -fms-extensions -fplan9-extensions -fsso-struct=endianness -fallow-single-precision -fcond-mismatch -flax-vector-conversions -fsigned-bitfields -fsigned-char -funsigned-bitfields -funsigned-char C++ Language Options -fabi-version=n -fno-access-control -faligned-new=n -fargs-in-order=n -fchar8_t -fcheck-new -fconstexpr-depth=n -fconstexpr-loop-limit=n -fconstexpr-ops-limit=n -fno-elide-constructors -fno-enforce-eh-specs -fno-gnu-keywords -fno-implicit-templates -fno-implicit-inline-templates -fno-implement-inlines -fms-extensions -fnew-inheriting-ctors -fnew-ttp-matching -fno-nonansi-builtins -fnothrow-opt -fno-operator-names -fno-optional-diags -fpermissive -fno-pretty-templates -frepo -fno-rtti -fsized-deallocation -ftemplate-backtrace-limit=n -ftemplate-depth=n -fno-threadsafe-statics -fuse-cxa-atexit -fno-weak -nostdinc++ -fvisibility-inlines-hidden -fvisibility-ms-compat -fext-numeric-literals -Wabi=n -Wabi-tag -Wconversion-null -Wctor-dtor-privacy -Wdelete-non-virtual-dtor -Wdeprecated-copy -Wdeprecated-copy-dtor -Wliteral-suffix -Wmultiple-inheritance -Wno-init-list-lifetime -Wnamespaces -Wnarrowing -Wpessimizing-move -Wredundant-move -Wnoexcept -Wnoexcept-type -Wclass-memaccess -Wnon-virtual-dtor -Wreorder -Wregister -Weffc++ -Wstrict-null-sentinel -Wtemplates -Wno-non-template-friend -Wold-style-cast -Woverloaded-virtual -Wno-pmf-conversions -Wno-class-conversion -Wno-terminate -Wsign-promo -Wvirtual-inheritance Objective-C and Objective-C++ Language Options -fconstant-string-class=class-name -fgnu-runtime -fnext-runtime -fno-nil-receivers -fobjc-abi-version=n -fobjc-call-cxx-cdtors -fobjc-direct-dispatch -fobjc-exceptions -fobjc-gc -fobjc-nilcheck -fobjc-std=objc1 -fno-local-ivars -fivar-visibility=[public|protected|private|package] -freplace-objc-classes -fzero-link -gen-decls -Wassign-intercept -Wno-protocol -Wselector -Wstrict-selector-match -Wundeclared-selector Diagnostic Message Formatting Options -fmessage-length=n -fdiagnostics-show-location=[once|every- line] -fdiagnostics-color=[auto|never|always] -fdiagnostics-format=[text|json] -fno-diagnostics-show-option -fno-diagnostics-show-caret -fno-diagnostics-show-labels -fno-diagnostics-show-line-numbers -fdiagnostics-minimum-margin-width=width -fdiagnostics-parseable-fixits -fdiagnostics-generate-patch -fdiagnostics-show-template-tree -fno-elide-type -fno-show-column Warning Options -fsyntax-only -fmax-errors=n -Wpedantic -pedantic-errors -w -Wextra -Wall -Waddress -Waddress-of-packed-member -Waggregate-return -Waligned-new -Walloc-zero -Walloc-size-larger-than=byte-size -Walloca -Walloca-larger-than=byte-size -Wno-aggressive-loop-optimizations -Warray-bounds -Warray-bounds=n -Wno-attributes -Wattribute-alias=n -Wbool-compare -Wbool-operation -Wno-builtin-declaration-mismatch -Wno-builtin-macro-redefined -Wc90-c99-compat -Wc99-c11-compat -Wc11-c2x-compat -Wc++-compat -Wc++11-compat -Wc++14-compat -Wc++17-compat -Wcast-align 
-Wcast-align=strict -Wcast-function-type -Wcast-qual -Wchar-subscripts -Wcatch-value -Wcatch-value=n -Wclobbered -Wcomment -Wconditionally-supported -Wconversion -Wcoverage-mismatch -Wno-cpp -Wdangling-else -Wdate-time -Wdelete-incomplete -Wno-attribute-warning -Wno-deprecated -Wno-deprecated-declarations -Wno-designated-init -Wdisabled-optimization -Wno-discarded-qualifiers -Wno-discarded-array-qualifiers -Wno-div-by-zero -Wdouble-promotion -Wduplicated-branches -Wduplicated-cond -Wempty-body -Wenum-compare -Wno-endif-labels -Wexpansion-to-defined -Werror -Werror=* -Wextra-semi -Wfatal-errors -Wfloat-equal -Wformat -Wformat=2 -Wno-format-contains-nul -Wno-format-extra-args -Wformat-nonliteral -Wformat-overflow=n -Wformat-security -Wformat-signedness -Wformat-truncation=n -Wformat-y2k -Wframe-address -Wframe-larger-than=byte-size -Wno-free-nonheap-object -Wjump-misses-init -Whsa -Wif-not-aligned -Wignored-qualifiers -Wignored-attributes -Wincompatible-pointer-types -Wimplicit -Wimplicit-fallthrough -Wimplicit-fallthrough=n -Wimplicit-function-declaration -Wimplicit-int -Winit-self -Winline -Wno-int-conversion -Wint-in-bool-context -Wno-int-to-pointer-cast -Winvalid-memory-model -Wno-invalid-offsetof -Winvalid-pch -Wlarger-than=byte-size -Wlogical-op -Wlogical-not-parentheses -Wlong-long -Wmain -Wmaybe-uninitialized -Wmemset-elt-size -Wmemset-transposed-args -Wmisleading-indentation -Wmissing-attributes -Wmissing-braces -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-include-dirs -Wmissing-noreturn -Wmissing-profile -Wno-multichar -Wmultistatement-macros -Wnonnull -Wnonnull-compare -Wnormalized=[none|id|nfc|nfkc] -Wnull-dereference -Wodr -Wno-overflow -Wopenmp-simd -Woverride-init-side-effects -Woverlength-strings -Wpacked -Wpacked-bitfield-compat -Wpacked-not-aligned -Wpadded -Wparentheses -Wno-pedantic-ms-format -Wplacement-new -Wplacement-new=n -Wpointer-arith -Wpointer-compare -Wno-pointer-to-int-cast -Wno-pragmas -Wno-prio-ctor-dtor -Wredundant-decls -Wrestrict -Wno-return-local-addr -Wreturn-type -Wsequence-point -Wshadow -Wno-shadow-ivar -Wshadow=global, -Wshadow=local, -Wshadow=compatible-local -Wshift-overflow -Wshift-overflow=n -Wshift-count-negative -Wshift-count-overflow -Wshift-negative-value -Wsign-compare -Wsign-conversion -Wfloat-conversion -Wno-scalar-storage-order -Wsizeof-pointer-div -Wsizeof-pointer-memaccess -Wsizeof-array-argument -Wstack-protector -Wstack-usage=byte-size -Wstrict-aliasing -Wstrict-aliasing=n -Wstrict-overflow -Wstrict-overflow=n -Wstringop-overflow=n -Wstringop-truncation -Wsubobject-linkage -Wsuggest-attribute=[pure|const|noreturn|format|malloc] -Wsuggest-final-types -Wsuggest-final-methods -Wsuggest-override -Wswitch -Wswitch-bool -Wswitch-default -Wswitch-enum -Wswitch-unreachable -Wsync-nand -Wsystem-headers -Wtautological-compare -Wtrampolines -Wtrigraphs -Wtype-limits -Wundef -Wuninitialized -Wunknown-pragmas -Wunsuffixed-float-constants -Wunused -Wunused-function -Wunused-label -Wunused-local-typedefs -Wunused-macros -Wunused-parameter -Wno-unused-result -Wunused-value -Wunused-variable -Wunused-const-variable -Wunused-const-variable=n -Wunused-but-set-parameter -Wunused-but-set-variable -Wuseless-cast -Wvariadic-macros -Wvector-operation-performance -Wvla -Wvla-larger-than=byte- size -Wvolatile-register-var -Wwrite-strings -Wzero-as-null-pointer-constant C and Objective-C-only Warning Options -Wbad-function-cast -Wmissing-declarations -Wmissing-parameter-type -Wmissing-prototypes -Wnested-externs -Wold-style-declaration 
-Wold-style-definition -Wstrict-prototypes -Wtraditional -Wtraditional-conversion -Wdeclaration-after-statement -Wpointer-sign Debugging Options -g -glevel -gdwarf -gdwarf-version -ggdb -grecord-gcc-switches -gno-record-gcc-switches -gstabs -gstabs+ -gstrict-dwarf -gno-strict-dwarf -gas-loc-support -gno-as-loc-support -gas-locview-support -gno-as-locview-support -gcolumn-info -gno-column-info -gstatement-frontiers -gno-statement-frontiers -gvariable-location-views -gno-variable-location-views -ginternal-reset-location-views -gno-internal-reset-location-views -ginline-points -gno-inline-points -gvms -gxcoff -gxcoff+ -gz[=type] -gsplit-dwarf -gdescribe-dies -gno-describe-dies -fdebug-prefix-map=old=new -fdebug-types-section -fno-eliminate-unused-debug-types -femit-struct-debug-baseonly -femit-struct-debug-reduced -femit-struct-debug-detailed[=spec-list] -feliminate-unused-debug-symbols -femit-class-debug-always -fno-merge-debug-strings -fno-dwarf2-cfi-asm -fvar-tracking -fvar-tracking-assignments Optimization Options -faggressive-loop-optimizations -falign-functions[=n[:m:[n2[:m2]]]] -falign-jumps[=n[:m:[n2[:m2]]]] -falign-labels[=n[:m:[n2[:m2]]]] -falign-loops[=n[:m:[n2[:m2]]]] -fassociative-math -fauto-profile -fauto-profile[=path] -fauto-inc-dec -fbranch-probabilities -fbranch-target-load-optimize -fbranch-target-load-optimize2 -fbtr-bb-exclusive -fcaller-saves -fcombine-stack-adjustments -fconserve-stack -fcompare-elim -fcprop-registers -fcrossjumping -fcse-follow-jumps -fcse-skip-blocks -fcx-fortran-rules -fcx-limited-range -fdata-sections -fdce -fdelayed-branch -fdelete-null-pointer-checks -fdevirtualize -fdevirtualize-speculatively -fdevirtualize-at-ltrans -fdse -fearly-inlining -fipa-sra -fexpensive-optimizations -ffat-lto-objects -ffast-math -ffinite-math-only -ffloat-store -fexcess-precision=style -fforward-propagate -ffp-contract=style -ffunction-sections -fgcse -fgcse-after-reload -fgcse-las -fgcse-lm -fgraphite-identity -fgcse-sm -fhoist-adjacent-loads -fif-conversion -fif-conversion2 -findirect-inlining -finline-functions -finline-functions-called-once -finline-limit=n -finline-small-functions -fipa-cp -fipa-cp-clone -fipa-bit-cp -fipa-vrp -fipa-pta -fipa-profile -fipa-pure-const -fipa-reference -fipa-reference-addressable -fipa-stack-alignment -fipa-icf -fira-algorithm=algorithm -flive-patching=level -fira-region=region -fira-hoist-pressure -fira-loop-pressure -fno-ira-share-save-slots -fno-ira-share-spill-slots -fisolate-erroneous-paths-dereference -fisolate-erroneous-paths-attribute -fivopts -fkeep-inline-functions -fkeep-static-functions -fkeep-static-consts -flimit-function-alignment -flive-range-shrinkage -floop-block -floop-interchange -floop-strip-mine -floop-unroll-and-jam -floop-nest-optimize -floop-parallelize-all -flra-remat -flto -flto-compression-level -flto-partition=alg -fmerge-all-constants -fmerge-constants -fmodulo-sched -fmodulo-sched-allow-regmoves -fmove-loop-invariants -fno-branch-count-reg -fno-defer-pop -fno-fp-int-builtin-inexact -fno-function-cse -fno-guess-branch-probability -fno-inline -fno-math-errno -fno-peephole -fno-peephole2 -fno-printf-return-value -fno-sched-interblock -fno-sched-spec -fno-signed-zeros -fno-toplevel-reorder -fno-trapping-math -fno-zero-initialized-in-bss -fomit-frame-pointer -foptimize-sibling-calls -fpartial-inlining -fpeel-loops -fpredictive-commoning -fprefetch-loop-arrays -fprofile-correction -fprofile-use -fprofile-use=path -fprofile-values -fprofile-reorder-functions -freciprocal-math -free -frename-registers 
-freorder-blocks -freorder-blocks-algorithm=algorithm -freorder-blocks-and-partition -freorder-functions -frerun-cse-after-loop -freschedule-modulo-scheduled-loops -frounding-math -fsave-optimization-record -fsched2-use-superblocks -fsched-pressure -fsched-spec-load -fsched-spec-load-dangerous -fsched-stalled-insns-dep[=n] -fsched-stalled-insns[=n] -fsched-group-heuristic -fsched-critical-path-heuristic -fsched-spec-insn-heuristic -fsched-rank-heuristic -fsched-last-insn-heuristic -fsched-dep-count-heuristic -fschedule-fusion -fschedule-insns -fschedule-insns2 -fsection-anchors -fselective-scheduling -fselective-scheduling2 -fsel-sched-pipelining -fsel-sched-pipelining-outer-loops -fsemantic-interposition -fshrink-wrap -fshrink-wrap-separate -fsignaling-nans -fsingle-precision-constant -fsplit-ivs-in-unroller -fsplit-loops -fsplit-paths -fsplit-wide-types -fssa-backprop -fssa-phiopt -fstdarg-opt -fstore-merging -fstrict-aliasing -fthread-jumps -ftracer -ftree-bit-ccp -ftree-builtin-call-dce -ftree-ccp -ftree-ch -ftree-coalesce-vars -ftree-copy-prop -ftree-dce -ftree-dominator-opts -ftree-dse -ftree-forwprop -ftree-fre -fcode-hoisting -ftree-loop-if-convert -ftree-loop-im -ftree-phiprop -ftree-loop-distribution -ftree-loop-distribute-patterns -ftree-loop-ivcanon -ftree-loop-linear -ftree-loop-optimize -ftree-loop-vectorize -ftree-parallelize-loops=n -ftree-pre -ftree-partial-pre -ftree-pta -ftree-reassoc -ftree-scev-cprop -ftree-sink -ftree-slsr -ftree-sra -ftree-switch-conversion -ftree-tail-merge -ftree-ter -ftree-vectorize -ftree-vrp -funconstrained-commons -funit-at-a-time -funroll-all-loops -funroll-loops -funsafe-math-optimizations -funswitch-loops -fipa-ra -fvariable-expansion-in-unroller -fvect-cost-model -fvpt -fweb -fwhole-program -fwpa -fuse-linker-plugin --param name=value -O -O0 -O1 -O2 -O3 -Os -Ofast -Og Program Instrumentation Options -p -pg -fprofile-arcs --coverage -ftest-coverage -fprofile-abs-path -fprofile-dir=path -fprofile-generate -fprofile-generate=path -fprofile-update=method -fprofile-filter-files=regex -fprofile-exclude-files=regex -fsanitize=style -fsanitize-recover -fsanitize-recover=style -fasan-shadow-offset=number -fsanitize-sections=s1,s2,... -fsanitize-undefined-trap-on-error -fbounds-check -fcf-protection=[full|branch|return|none] -fstack-protector -fstack-protector-all -fstack-protector-strong -fstack-protector-explicit -fstack-check -fstack-limit-register=reg -fstack-limit-symbol=sym -fno-stack-limit -fsplit-stack -fvtable-verify=[std|preinit|none] -fvtv-counts -fvtv-debug -finstrument-functions -finstrument-functions-exclude-function-list=sym,sym,... -finstrument-functions-exclude-file-list=file,file,... 
Preprocessor Options -Aquestion=answer -A-question[=answer] -C -CC -Dmacro[=defn] -dD -dI -dM -dN -dU -fdebug-cpp -fdirectives-only -fdollars-in-identifiers -fexec-charset=charset -fextended-identifiers -finput-charset=charset -fmacro-prefix-map=old=new -fno-canonical-system-headers -fpch-deps -fpch-preprocess -fpreprocessed -ftabstop=width -ftrack-macro-expansion -fwide-exec-charset=charset -fworking-directory -H -imacros file -include file -M -MD -MF -MG -MM -MMD -MP -MQ -MT -no-integrated-cpp -P -pthread -remap -traditional -traditional-cpp -trigraphs -Umacro -undef -Wp,option -Xpreprocessor option Assembler Options -Wa,option -Xassembler option Linker Options object-file-name -fuse-ld=linker -llibrary -nostartfiles -nodefaultlibs -nolibc -nostdlib -e entry --entry=entry -pie -pthread -r -rdynamic -s -static -static-pie -static-libgcc -static-libstdc++ -static-libasan -static-libtsan -static-liblsan -static-libubsan -shared -shared-libgcc -symbolic -T script -Wl,option -Xlinker option -u symbol -z keyword Directory Options -Bprefix -Idir -I- -idirafter dir -imacros file -imultilib dir -iplugindir=dir -iprefix file -iquote dir -isysroot dir -isystem dir -iwithprefix dir -iwithprefixbefore dir -Ldir -no-canonical-prefixes --no-sysroot-suffix -nostdinc -nostdinc++ --sysroot=dir Code Generation Options -fcall-saved-reg -fcall-used-reg -ffixed-reg -fexceptions -fnon-call-exceptions -fdelete-dead-exceptions -funwind-tables -fasynchronous-unwind-tables -fno-gnu-unique -finhibit-size-directive -fno-common -fno-ident -fpcc-struct-return -fpic -fPIC -fpie -fPIE -fno-plt -fno-jump-tables -frecord-gcc-switches -freg-struct-return -fshort-enums -fshort-wchar -fverbose-asm -fpack-struct[=n] -fleading-underscore -ftls-model=model -fstack-reuse=reuse_level -ftrampolines -ftrapv -fwrapv -fvisibility=[default|internal|hidden|protected] -fstrict-volatile-bitfields -fsync-libcalls Developer Options -dletters -dumpspecs -dumpmachine -dumpversion -dumpfullversion -fchecking -fchecking=n -fdbg-cnt-list -fdbg-cnt=counter-value-list -fdisable-ipa-pass_name -fdisable-rtl-pass_name -fdisable-rtl-pass-name=range-list -fdisable-tree-pass_name -fdisable-tree-pass-name=range-list -fdump-debug -fdump-earlydebug -fdump-noaddr -fdump-unnumbered -fdump-unnumbered-links -fdump-final-insns[=file] -fdump-ipa-all -fdump-ipa-cgraph -fdump-ipa-inline -fdump-lang-all -fdump-lang-switch -fdump-lang-switch-options -fdump-lang-switch-options=filename -fdump-passes -fdump-rtl-pass -fdump-rtl-pass=filename -fdump-statistics -fdump-tree-all -fdump-tree-switch -fdump-tree-switch-options -fdump-tree-switch-options=filename -fcompare-debug[=opts] -fcompare-debug-second -fenable-kind-pass -fenable-kind-pass=range-list -fira-verbose=n -flto-report -flto-report-wpa -fmem-report-wpa -fmem-report -fpre-ipa-mem-report -fpost-ipa-mem-report -fopt-info -fopt-info-options[=file] -fprofile-report -frandom-seed=string -fsched-verbose=n -fsel-sched-verbose -fsel-sched-dump-cfg -fsel-sched-pipelining-verbose -fstats -fstack-usage -ftime-report -ftime-report-details -fvar-tracking-assignments-toggle -gtoggle -print-file-name=library -print-libgcc-file-name -print-multi-directory -print-multi-lib -print-multi-os-directory -print-prog-name=program -print-search-dirs -Q -print-sysroot -print-sysroot-headers-suffix -save-temps -save-temps=cwd -save-temps=obj -time[=file] Machine-Dependent Options AArch64 Options -mabi=name -mbig-endian -mlittle-endian -mgeneral-regs-only -mcmodel=tiny -mcmodel=small -mcmodel=large -mstrict-align -mno-strict-align 
-momit-leaf-frame-pointer -mtls-dialect=desc -mtls-dialect=traditional -mtls-size=size -mfix-cortex-a53-835769 -mfix-cortex-a53-843419 -mlow-precision-recip-sqrt -mlow-precision-sqrt -mlow-precision-div -mpc-relative-literal-loads -msign-return-address=scope -mbranch-protection=none|standard|pac-ret[+leaf]|bti -mharden-sls=opts -march=name -mcpu=name -mtune=name -moverride=string -mverbose-cost-dump -mstack-protector-guard=guard -mstack-protector-guard-reg=sysreg -mstack-protector-guard-offset=offset -mtrack-speculation -moutline-atomics Adapteva Epiphany Options -mhalf-reg-file -mprefer-short-insn-regs -mbranch-cost=num -mcmove -mnops=num -msoft-cmpsf -msplit-lohi -mpost-inc -mpost-modify -mstack-offset=num -mround-nearest -mlong-calls -mshort-calls -msmall16 -mfp-mode=mode -mvect-double -max-vect-align=num -msplit-vecmove-early -m1reg-reg AMD GCN Options -march=gpu -mtune=gpu -mstack-size=bytes ARC Options -mbarrel-shifter -mjli-always -mcpu=cpu -mA6 -mARC600 -mA7 -mARC700 -mdpfp -mdpfp-compact -mdpfp-fast -mno-dpfp-lrsr -mea -mno-mpy -mmul32x16 -mmul64 -matomic -mnorm -mspfp -mspfp-compact -mspfp-fast -msimd -msoft-float -mswap -mcrc -mdsp-packa -mdvbf -mlock -mmac-d16 -mmac-24 -mrtsc -mswape -mtelephony -mxy -misize -mannotate-align -marclinux -marclinux_prof -mlong-calls -mmedium-calls -msdata -mirq-ctrl-saved -mrgf-banked-regs -mlpc-width=width -G num -mvolatile-cache -mtp-regno=regno -malign-call -mauto-modify-reg -mbbit-peephole -mno-brcc -mcase-vector-pcrel -mcompact-casesi -mno-cond-exec -mearly-cbranchsi -mexpand-adddi -mindexed-loads -mlra -mlra-priority-none -mlra-priority-compact mlra-priority-noncompact -mmillicode -mmixed-code -mq-class -mRcq -mRcw -msize-level=level -mtune=cpu -mmultcost=num -mcode-density-frame -munalign-prob-threshold=probability -mmpy-option=multo -mdiv-rem -mcode-density -mll64 -mfpu=fpu -mrf16 -mbranch-index ARM Options -mapcs-frame -mno-apcs-frame -mabi=name -mapcs-stack-check -mno-apcs-stack-check -mapcs-reentrant -mno-apcs-reentrant -mgeneral-regs-only -msched-prolog -mno-sched-prolog -mlittle-endian -mbig-endian -mbe8 -mbe32 -mfloat-abi=name -mfp16-format=name -mthumb-interwork -mno-thumb-interwork -mcpu=name -march=name -mfpu=name -mtune=name -mprint-tune-info -mstructure-size-boundary=n -mabort-on-noreturn -mlong-calls -mno-long-calls -msingle-pic-base -mno-single-pic-base -mpic-register=reg -mnop-fun-dllimport -mpoke-function-name -mthumb -marm -mflip-thumb -mtpcs-frame -mtpcs-leaf-frame -mcaller-super-interworking -mcallee-super-interworking -mtp=name -mtls-dialect=dialect -mword-relocations -mfix-cortex-m3-ldrd -munaligned-access -mneon-for-64bits -mslow-flash-data -masm-syntax-unified -mrestrict-it -mverbose-cost-dump -mpure-code -mcmse AVR Options -mmcu=mcu -mabsdata -maccumulate-args -mbranch-cost=cost -mcall-prologues -mgas-isr-prologues -mint8 -mn_flash=size -mno-interrupts -mmain-is-OS_task -mrelax -mrmw -mstrict-X -mtiny-stack -mfract-convert-truncate -mshort-calls -nodevicelib -nodevicespecs -Waddr-space-convert -Wmisspelled-isr Blackfin Options -mcpu=cpu[-sirevision] -msim -momit-leaf-frame-pointer -mno-omit-leaf-frame-pointer -mspecld-anomaly -mno-specld-anomaly -mcsync-anomaly -mno-csync-anomaly -mlow-64k -mno-low64k -mstack-check-l1 -mid-shared-library -mno-id-shared-library -mshared-library-id=n -mleaf-id-shared-library -mno-leaf-id-shared-library -msep-data -mno-sep-data -mlong-calls -mno-long-calls -mfast-fp -minline-plt -mmulticore -mcorea -mcoreb -msdram -micplb C6X Options -mbig-endian -mlittle-endian -march=cpu -msim 
-msdata=sdata-type CRIS Options -mcpu=cpu -march=cpu -mtune=cpu -mmax-stack-frame=n -melinux-stacksize=n -metrax4 -metrax100 -mpdebug -mcc-init -mno-side-effects -mstack-align -mdata-align -mconst-align -m32-bit -m16-bit -m8-bit -mno-prologue-epilogue -mno-gotplt -melf -maout -melinux -mlinux -sim -sim2 -mmul-bug-workaround -mno-mul-bug-workaround CR16 Options -mmac -mcr16cplus -mcr16c -msim -mint32 -mbit-ops -mdata-model=model C-SKY Options -march=arch -mcpu=cpu -mbig-endian -EB -mlittle-endian -EL -mhard-float -msoft-float -mfpu=fpu -mdouble-float -mfdivdu -melrw -mistack -mmp -mcp -mcache -msecurity -mtrust -mdsp -medsp -mvdsp -mdiv -msmart -mhigh-registers -manchor -mpushpop -mmultiple-stld -mconstpool -mstack-size -mccrt -mbranch-cost=n -mcse-cc -msched-prolog Darwin Options -all_load -allowable_client -arch -arch_errors_fatal -arch_only -bind_at_load -bundle -bundle_loader -client_name -compatibility_version -current_version -dead_strip -dependency-file -dylib_file -dylinker_install_name -dynamic -dynamiclib -exported_symbols_list -filelist -flat_namespace -force_cpusubtype_ALL -force_flat_namespace -headerpad_max_install_names -iframework -image_base -init -install_name -keep_private_externs -multi_module -multiply_defined -multiply_defined_unused -noall_load -no_dead_strip_inits_and_terms -nofixprebinding -nomultidefs -noprebind -noseglinkedit -pagezero_size -prebind -prebind_all_twolevel_modules -private_bundle -read_only_relocs -sectalign -sectobjectsymbols -whyload -seg1addr -sectcreate -sectobjectsymbols -sectorder -segaddr -segs_read_only_addr -segs_read_write_addr -seg_addr_table -seg_addr_table_filename -seglinkedit -segprot -segs_read_only_addr -segs_read_write_addr -single_module -static -sub_library -sub_umbrella -twolevel_namespace -umbrella -undefined -unexported_symbols_list -weak_reference_mismatches -whatsloaded -F -gused -gfull -mmacosx-version-min=version -mkernel -mone-byte-bool DEC Alpha Options -mno-fp-regs -msoft-float -mieee -mieee-with-inexact -mieee-conformant -mfp-trap-mode=mode -mfp-rounding-mode=mode -mtrap-precision=mode -mbuild-constants -mcpu=cpu-type -mtune=cpu-type -mbwx -mmax -mfix -mcix -mfloat-vax -mfloat-ieee -mexplicit-relocs -msmall-data -mlarge-data -msmall-text -mlarge-text -mmemory-latency=time FR30 Options -msmall-model -mno-lsim FT32 Options -msim -mlra -mnodiv -mft32b -mcompress -mnopm FRV Options -mgpr-32 -mgpr-64 -mfpr-32 -mfpr-64 -mhard-float -msoft-float -malloc-cc -mfixed-cc -mdword -mno-dword -mdouble -mno-double -mmedia -mno-media -mmuladd -mno-muladd -mfdpic -minline-plt -mgprel-ro -multilib-library-pic -mlinked-fp -mlong-calls -malign-labels -mlibrary-pic -macc-4 -macc-8 -mpack -mno-pack -mno-eflags -mcond-move -mno-cond-move -moptimize-membar -mno-optimize-membar -mscc -mno-scc -mcond-exec -mno-cond-exec -mvliw-branch -mno-vliw-branch -mmulti-cond-exec -mno-multi-cond-exec -mnested-cond-exec -mno-nested-cond-exec -mtomcat-stats -mTLS -mtls -mcpu=cpu GNU/Linux Options -mglibc -muclibc -mmusl -mbionic -mandroid -tno-android-cc -tno-android-ld H8/300 Options -mrelax -mh -ms -mn -mexr -mno-exr -mint32 -malign-300 HPPA Options -march=architecture-type -mcaller-copies -mdisable-fpregs -mdisable-indexing -mfast-indirect-calls -mgas -mgnu-ld -mhp-ld -mfixed-range=register-range -mjump-in-delay -mlinker-opt -mlong-calls -mlong-load-store -mno-disable-fpregs -mno-disable-indexing -mno-fast-indirect-calls -mno-gas -mno-jump-in-delay -mno-long-load-store -mno-portable-runtime -mno-soft-float -mno-space-regs -msoft-float -mpa-risc-1-0 
-mpa-risc-1-1 -mpa-risc-2-0 -mportable-runtime -mschedule=cpu-type -mspace-regs -msio -mwsio -munix=unix-std -nolibdld -static -threads IA-64 Options -mbig-endian -mlittle-endian -mgnu-as -mgnu-ld -mno-pic -mvolatile-asm-stop -mregister-names -msdata -mno-sdata -mconstant-gp -mauto-pic -mfused-madd -minline-float-divide-min-latency -minline-float-divide-max-throughput -mno-inline-float-divide -minline-int-divide-min-latency -minline-int-divide-max-throughput -mno-inline-int-divide -minline-sqrt-min-latency -minline-sqrt-max-throughput -mno-inline-sqrt -mdwarf2-asm -mearly-stop-bits -mfixed-range=register-range -mtls-size=tls-size -mtune=cpu- type -milp32 -mlp64 -msched-br-data-spec -msched-ar-data-spec -msched-control-spec -msched-br-in-data-spec -msched-ar-in-data-spec -msched-in-control-spec -msched-spec-ldc -msched-spec-control-ldc -msched-prefer-non-data-spec-insns -msched-prefer-non-control-spec-insns -msched-stop-bits-after-every-cycle -msched-count-spec-in-critical-path -msel-sched-dont-check-control-spec -msched-fp-mem-deps-zero-cost -msched-max-memory-insns-hard-limit -msched-max-memory-insns=max-insns LM32 Options -mbarrel-shift-enabled -mdivide-enabled -mmultiply-enabled -msign-extend-enabled -muser-enabled M32R/D Options -m32r2 -m32rx -m32r -mdebug -malign-loops -mno-align-loops -missue-rate=number -mbranch-cost=number -mmodel=code-size-model-type -msdata=sdata-type -mno-flush-func -mflush-func=name -mno-flush-trap -mflush-trap=number -G num M32C Options -mcpu=cpu -msim -memregs=number M680x0 Options -march=arch -mcpu=cpu -mtune=tune -m68000 -m68020 -m68020-40 -m68020-60 -m68030 -m68040 -m68060 -mcpu32 -m5200 -m5206e -m528x -m5307 -m5407 -mcfv4e -mbitfield -mno-bitfield -mc68000 -mc68020 -mnobitfield -mrtd -mno-rtd -mdiv -mno-div -mshort -mno-short -mhard-float -m68881 -msoft-float -mpcrel -malign-int -mstrict-align -msep-data -mno-sep-data -mshared-library-id=n -mid-shared-library -mno-id-shared-library -mxgot -mno-xgot -mlong-jump-table-offsets MCore Options -mhardlit -mno-hardlit -mdiv -mno-div -mrelax-immediates -mno-relax-immediates -mwide-bitfields -mno-wide-bitfields -m4byte-functions -mno-4byte-functions -mcallgraph-data -mno-callgraph-data -mslow-bytes -mno-slow-bytes -mno-lsim -mlittle-endian -mbig-endian -m210 -m340 -mstack-increment MeP Options -mabsdiff -mall-opts -maverage -mbased=n -mbitops -mc=n -mclip -mconfig=name -mcop -mcop32 -mcop64 -mivc2 -mdc -mdiv -meb -mel -mio-volatile -ml -mleadz -mm -mminmax -mmult -mno-opts -mrepeat -ms -msatur -msdram -msim -msimnovec -mtf -mtiny=n MicroBlaze Options -msoft-float -mhard-float -msmall-divides -mcpu=cpu -mmemcpy -mxl-soft-mul -mxl-soft-div -mxl-barrel-shift -mxl-pattern-compare -mxl-stack-check -mxl-gp-opt -mno-clearbss -mxl-multiply-high -mxl-float-convert -mxl-float-sqrt -mbig-endian -mlittle-endian -mxl-reorder -mxl-mode-app- model -mpic-data-is-text-relative MIPS Options -EL -EB -march=arch -mtune=arch -mips1 -mips2 -mips3 -mips4 -mips32 -mips32r2 -mips32r3 -mips32r5 -mips32r6 -mips64 -mips64r2 -mips64r3 -mips64r5 -mips64r6 -mips16 -mno-mips16 -mflip-mips16 -minterlink-compressed -mno-interlink-compressed -minterlink-mips16 -mno-interlink-mips16 -mabi=abi -mabicalls -mno-abicalls -mshared -mno-shared -mplt -mno-plt -mxgot -mno-xgot -mgp32 -mgp64 -mfp32 -mfpxx -mfp64 -mhard-float -msoft-float -mno-float -msingle-float -mdouble-float -modd-spreg -mno-odd-spreg -mabs=mode -mnan=encoding -mdsp -mno-dsp -mdspr2 -mno-dspr2 -mmcu -mmno-mcu -meva -mno-eva -mvirt -mno-virt -mxpa -mno-xpa -mcrc -mno-crc -mginv -mno-ginv 
-mmicromips -mno-micromips -mmsa -mno-msa -mloongson-mmi -mno-loongson-mmi -mloongson-ext -mno-loongson-ext -mloongson-ext2 -mno-loongson-ext2 -mfpu=fpu-type -msmartmips -mno-smartmips -mpaired-single -mno-paired-single -mdmx -mno-mdmx -mips3d -mno-mips3d -mmt -mno-mt -mllsc -mno-llsc -mlong64 -mlong32 -msym32 -mno-sym32 -Gnum -mlocal-sdata -mno-local-sdata -mextern-sdata -mno-extern-sdata -mgpopt -mno-gopt -membedded-data -mno-embedded-data -muninit-const-in-rodata -mno-uninit-const-in-rodata -mcode-readable=setting -msplit-addresses -mno-split-addresses -mexplicit-relocs -mno-explicit-relocs -mcheck-zero-division -mno-check-zero-division -mdivide-traps -mdivide-breaks -mload-store-pairs -mno-load-store-pairs -mmemcpy -mno-memcpy -mlong-calls -mno-long-calls -mmad -mno-mad -mimadd -mno-imadd -mfused-madd -mno-fused-madd -nocpp -mfix-24k -mno-fix-24k -mfix-r4000 -mno-fix-r4000 -mfix-r4400 -mno-fix-r4400 -mfix-r5900 -mno-fix-r5900 -mfix-r10000 -mno-fix-r10000 -mfix-rm7000 -mno-fix-rm7000 -mfix-vr4120 -mno-fix-vr4120 -mfix-vr4130 -mno-fix-vr4130 -mfix-sb1 -mno-fix-sb1 -mflush-func=func -mno-flush-func -mbranch-cost=num -mbranch-likely -mno-branch-likely -mcompact-branches=policy -mfp-exceptions -mno-fp-exceptions -mvr4130-align -mno-vr4130-align -msynci -mno-synci -mlxc1-sxc1 -mno-lxc1-sxc1 -mmadd4 -mno-madd4 -mrelax-pic-calls -mno-relax-pic-calls -mmcount-ra-address -mframe-header-opt -mno-frame-header-opt MMIX Options -mlibfuncs -mno-libfuncs -mepsilon -mno-epsilon -mabi=gnu -mabi=mmixware -mzero-extend -mknuthdiv -mtoplevel-symbols -melf -mbranch-predict -mno-branch-predict -mbase-addresses -mno-base-addresses -msingle-exit -mno-single-exit MN10300 Options -mmult-bug -mno-mult-bug -mno-am33 -mam33 -mam33-2 -mam34 -mtune=cpu-type -mreturn-pointer-on-d0 -mno-crt0 -mrelax -mliw -msetlb Moxie Options -meb -mel -mmul.x -mno-crt0 MSP430 Options -msim -masm-hex -mmcu= -mcpu= -mlarge -msmall -mrelax -mwarn-mcu -mcode-region= -mdata-region= -msilicon-errata= -msilicon-errata-warn= -mhwmult= -minrt NDS32 Options -mbig-endian -mlittle-endian -mreduced-regs -mfull-regs -mcmov -mno-cmov -mext-perf -mno-ext-perf -mext-perf2 -mno-ext-perf2 -mext-string -mno-ext-string -mv3push -mno-v3push -m16bit -mno-16bit -misr-vector-size=num -mcache-block-size=num -march=arch -mcmodel=code-model -mctor-dtor -mrelax Nios II Options -G num -mgpopt=option -mgpopt -mno-gpopt -mgprel-sec=regexp -mr0rel-sec=regexp -mel -meb -mno-bypass-cache -mbypass-cache -mno-cache-volatile -mcache-volatile -mno-fast-sw-div -mfast-sw-div -mhw-mul -mno-hw-mul -mhw-mulx -mno-hw-mulx -mno-hw-div -mhw-div -mcustom-insn=N -mno-custom-insn -mcustom-fpu-cfg=name -mhal -msmallc -msys-crt0=name -msys-lib=name -march=arch -mbmx -mno-bmx -mcdx -mno-cdx Nvidia PTX Options -m32 -m64 -mmainkernel -moptimize OpenRISC Options -mboard=name -mnewlib -mhard-mul -mhard-div -msoft-mul -msoft-div -mcmov -mror -msext -msfimm -mshftimm PDP-11 Options -mfpu -msoft-float -mac0 -mno-ac0 -m40 -m45 -m10 -mint32 -mno-int16 -mint16 -mno-int32 -msplit -munix-asm -mdec-asm -mgnu-asm -mlra picoChip Options -mae=ae_type -mvliw-lookahead=N -msymbol-as-address -mno-inefficient-warnings PowerPC Options See RS/6000 and PowerPC Options. 
RISC-V Options -mbranch-cost=N-instruction -mplt -mno-plt -mabi=ABI-string -mfdiv -mno-fdiv -mdiv -mno-div -march=ISA-string -mtune=processor-string -mpreferred-stack-boundary=num -msmall-data-limit=N-bytes -msave-restore -mno-save-restore -mstrict-align -mno-strict-align -mcmodel=medlow -mcmodel=medany -mexplicit-relocs -mno-explicit-relocs -mrelax -mno-relax -mriscv-attribute -mmo-riscv-attribute RL78 Options -msim -mmul=none -mmul=g13 -mmul=g14 -mallregs -mcpu=g10 -mcpu=g13 -mcpu=g14 -mg10 -mg13 -mg14 -m64bit-doubles -m32bit-doubles -msave-mduc-in-interrupts RS/6000 and PowerPC Options -mcpu=cpu-type -mtune=cpu-type -mcmodel=code-model -mpowerpc64 -maltivec -mno-altivec -mpowerpc-gpopt -mno-powerpc-gpopt -mpowerpc-gfxopt -mno-powerpc-gfxopt -mmfcrf -mno-mfcrf -mpopcntb -mno-popcntb -mpopcntd -mno-popcntd -mfprnd -mno-fprnd -mcmpb -mno-cmpb -mmfpgpr -mno-mfpgpr -mhard-dfp -mno-hard-dfp -mfull-toc -mminimal-toc -mno-fp-in-toc -mno-sum-in-toc -m64 -m32 -mxl-compat -mno-xl-compat -mpe -malign-power -malign-natural -msoft-float -mhard-float -mmultiple -mno-multiple -mupdate -mno-update -mavoid-indexed-addresses -mno-avoid-indexed-addresses -mfused-madd -mno-fused-madd -mbit-align -mno-bit-align -mstrict-align -mno-strict-align -mrelocatable -mno-relocatable -mrelocatable-lib -mno-relocatable-lib -mtoc -mno-toc -mlittle -mlittle-endian -mbig -mbig-endian -mdynamic-no-pic -mswdiv -msingle-pic-base -mprioritize-restricted-insns=priority -msched-costly-dep=dependence_type -minsert-sched-nops=scheme -mcall-aixdesc -mcall-eabi -mcall-freebsd -mcall-linux -mcall-netbsd -mcall-openbsd -mcall-sysv -mcall-sysv-eabi -mcall-sysv-noeabi -mtraceback=traceback_type -maix-struct-return -msvr4-struct-return -mabi=abi-type -msecure-plt -mbss-plt -mlongcall -mno-longcall -mpltseq -mno-pltseq -mblock-move-inline-limit=num -mblock-compare-inline-limit=num -mblock-compare-inline-loop-limit=num -mstring-compare-inline-limit=num -misel -mno-isel -mvrsave -mno-vrsave -mmulhw -mno-mulhw -mdlmzb -mno-dlmzb -mprototype -mno-prototype -msim -mmvme -mads -myellowknife -memb -msdata -msdata=opt -mreadonly-in-sdata -mvxworks -G num -mrecip -mrecip=opt -mno-recip -mrecip-precision -mno-recip-precision -mveclibabi=type -mfriz -mno-friz -mpointers-to-nested-functions -mno-pointers-to-nested-functions -msave-toc-indirect -mno-save-toc-indirect -mpower8-fusion -mno-mpower8-fusion -mpower8-vector -mno-power8-vector -mcrypto -mno-crypto -mhtm -mno-htm -mquad-memory -mno-quad-memory -mquad-memory-atomic -mno-quad-memory-atomic -mcompat-align-parm -mno-compat-align-parm -mfloat128 -mno-float128 -mfloat128-hardware -mno-float128-hardware -mgnu-attribute -mno-gnu-attribute -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset RX Options -m64bit-doubles -m32bit-doubles -fpu -nofpu -mcpu= -mbig-endian-data -mlittle-endian-data -msmall-data -msim -mno-sim -mas100-syntax -mno-as100-syntax -mrelax -mmax-constant-size= -mint-register= -mpid -mallow-string-insns -mno-allow-string-insns -mjsr -mno-warn-multiple-fast-interrupts -msave-acc-in-interrupts S/390 and zSeries Options -mtune=cpu-type -march=cpu-type -mhard-float -msoft-float -mhard-dfp -mno-hard-dfp -mlong-double-64 -mlong-double-128 -mbackchain -mno-backchain -mpacked-stack -mno-packed-stack -msmall-exec -mno-small-exec -mmvcle -mno-mvcle -m64 -m31 -mdebug -mno-debug -mesa -mzarch -mhtm -mvx -mzvector -mtpf-trace -mno-tpf-trace -mfused-madd -mno-fused-madd -mwarn-framesize -mwarn-dynamicstack -mstack-size -mstack-guard 
-mhotpatch=halfwords,halfwords Score Options -meb -mel -mnhwloop -muls -mmac -mscore5 -mscore5u -mscore7 -mscore7d SH Options -m1 -m2 -m2e -m2a-nofpu -m2a-single-only -m2a-single -m2a -m3 -m3e -m4-nofpu -m4-single-only -m4-single -m4 -m4a-nofpu -m4a-single-only -m4a-single -m4a -m4al -mb -ml -mdalign -mrelax -mbigtable -mfmovd -mrenesas -mno-renesas -mnomacsave -mieee -mno-ieee -mbitops -misize -minline-ic_invalidate -mpadstruct -mprefergot -musermode -multcost=number -mdiv=strategy -mdivsi3_libfunc=name -mfixed-range=register-range -maccumulate-outgoing-args -matomic-model=atomic-model -mbranch-cost=num -mzdcbranch -mno-zdcbranch -mcbranch-force-delay-slot -mfused-madd -mno-fused-madd -mfsca -mno-fsca -mfsrra -mno-fsrra -mpretend-cmove -mtas Solaris 2 Options -mclear-hwcap -mno-clear-hwcap -mimpure-text -mno-impure-text -pthreads SPARC Options -mcpu=cpu-type -mtune=cpu-type -mcmodel=code- model -mmemory-model=mem-model -m32 -m64 -mapp-regs -mno-app-regs -mfaster-structs -mno-faster-structs -mflat -mno-flat -mfpu -mno-fpu -mhard-float -msoft-float -mhard-quad-float -msoft-quad-float -mstack-bias -mno-stack-bias -mstd-struct-return -mno-std-struct-return -munaligned-doubles -mno-unaligned-doubles -muser-mode -mno-user-mode -mv8plus -mno-v8plus -mvis -mno-vis -mvis2 -mno-vis2 -mvis3 -mno-vis3 -mvis4 -mno-vis4 -mvis4b -mno-vis4b -mcbcond -mno-cbcond -mfmaf -mno-fmaf -mfsmuld -mno-fsmuld -mpopc -mno-popc -msubxc -mno-subxc -mfix-at697f -mfix-ut699 -mfix-ut700 -mfix-gr712rc -mlra -mno-lra SPU Options -mwarn-reloc -merror-reloc -msafe-dma -munsafe-dma -mbranch-hints -msmall-mem -mlarge-mem -mstdmain -mfixed-range=register-range -mea32 -mea64 -maddress-space-conversion -mno-address-space-conversion -mcache-size=cache-size -matomic-updates -mno-atomic-updates System V Options -Qy -Qn -YP,paths -Ym,dir TILE-Gx Options -mcpu=CPU -m32 -m64 -mbig-endian -mlittle-endian -mcmodel=code-model TILEPro Options -mcpu=cpu -m32 V850 Options -mlong-calls -mno-long-calls -mep -mno-ep -mprolog-function -mno-prolog-function -mspace -mtda=n -msda=n -mzda=n -mapp-regs -mno-app-regs -mdisable-callt -mno-disable-callt -mv850e2v3 -mv850e2 -mv850e1 -mv850es -mv850e -mv850 -mv850e3v5 -mloop -mrelax -mlong-jumps -msoft-float -mhard-float -mgcc-abi -mrh850-abi -mbig-switch VAX Options -mg -mgnu -munix Visium Options -mdebug -msim -mfpu -mno-fpu -mhard-float -msoft-float -mcpu=cpu-type -mtune=cpu-type -msv-mode -muser-mode VMS Options -mvms-return-codes -mdebug-main=prefix -mmalloc64 -mpointer-size=size VxWorks Options -mrtp -non-static -Bstatic -Bdynamic -Xbind-lazy -Xbind-now x86 Options -mtune=cpu-type -march=cpu-type -mtune-ctrl=feature-list -mdump-tune-features -mno-default -mfpmath=unit -masm=dialect -mno-fancy-math-387 -mno-fp-ret-in-387 -m80387 -mhard-float -msoft-float -mno-wide-multiply -mrtd -malign-double -mpreferred-stack-boundary=num -mincoming-stack-boundary=num -mcld -mcx16 -msahf -mmovbe -mcrc32 -mrecip -mrecip=opt -mvzeroupper -mprefer-avx128 -mprefer-vector-width=opt -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4 -mavx -mavx2 -mavx512f -mavx512pf -mavx512er -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -msha -maes -mpclmul -mfsgsbase -mrdrnd -mf16c -mfma -mpconfig -mwbnoinvd -mptwrite -mprefetchwt1 -mclflushopt -mclwb -mxsavec -mxsaves -msse4a -m3dnow -m3dnowa -mpopcnt -mabm -mbmi -mtbm -mfma4 -mxop -madx -mlzcnt -mbmi2 -mfxsr -mxsave -mxsaveopt -mrtm -mhle -mlwp -mmwaitx -mclzero -mpku -mthreads -mgfni -mvaes -mwaitpkg -mshstk -mmanual-endbr -mforce-indirect-call 
-mavx512vbmi2 -mvpclmulqdq -mavx512bitalg -mmovdiri -mmovdir64b -mavx512vpopcntdq -mavx5124fmaps -mavx512vnni -mavx5124vnniw -mprfchw -mrdpid -mrdseed -msgx -mcldemote -mms-bitfields -mno-align-stringops -minline-all-stringops -minline-stringops-dynamically -mstringop-strategy=alg -mmemcpy-strategy=strategy -mmemset-strategy=strategy -mpush-args -maccumulate-outgoing-args -m128bit-long-double -m96bit-long-double -mlong-double-64 -mlong-double-80 -mlong-double-128 -mregparm=num -msseregparm -mveclibabi=type -mvect8-ret-in-mem -mpc32 -mpc64 -mpc80 -mstackrealign -momit-leaf-frame-pointer -mno-red-zone -mno-tls-direct-seg-refs -mcmodel=code-model -mabi=name -maddress-mode=mode -m32 -m64 -mx32 -m16 -miamcu -mlarge-data-threshold=num -msse2avx -mfentry -mrecord-mcount -mnop-mcount -m8bit-idiv -minstrument-return=type -mfentry-name=name -mfentry-section=name -mavx256-split-unaligned-load -mavx256-split-unaligned-store -malign-data=type -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset -mstack-protector-guard-symbol=symbol -mgeneral-regs-only -mcall-ms2sysv-xlogues -mindirect-branch=choice -mfunction-return=choice -mindirect-branch-register x86 Windows Options -mconsole -mcygwin -mno-cygwin -mdll -mnop-fun-dllimport -mthread -municode -mwin32 -mwindows -fno-set-stack-executable Xstormy16 Options -msim Xtensa Options -mconst16 -mno-const16 -mfused-madd -mno-fused-madd -mforce-no-pic -mserialize-volatile -mno-serialize-volatile -mtext-section-literals -mno-text-section-literals -mauto-litpools -mno-auto-litpools -mtarget-align -mno-target-align -mlongcalls -mno-longcalls zSeries Options See S/390 and zSeries Options. Options Controlling the Kind of Output Compilation can involve up to four stages: preprocessing, compilation proper, assembly and linking, always in that order. GCC is capable of preprocessing and compiling several files either into several assembler input files, or into one assembler input file; then each assembler input file produces an object file, and linking combines all the object files (those newly compiled, and those specified as input) into an executable file. For any given input file, the file name suffix determines what kind of compilation is done: file.c C source code that must be preprocessed. file.i C source code that should not be preprocessed. file.ii C++ source code that should not be preprocessed. file.m Objective-C source code. Note that you must link with the libobjc library to make an Objective-C program work. file.mi Objective-C source code that should not be preprocessed. file.mm file.M Objective-C++ source code. Note that you must link with the libobjc library to make an Objective-C++ program work. Note that .M refers to a literal capital M. file.mii Objective-C++ source code that should not be preprocessed. file.h C, C++, Objective-C or Objective-C++ header file to be turned into a precompiled header (default), or C, C++ header file to be turned into an Ada spec (via the -fdump-ada-spec switch). file.cc file.cp file.cxx file.cpp file.CPP file.c++ file.C C++ source code that must be preprocessed. Note that in .cxx, the last two letters must both be literally x. Likewise, .C refers to a literal capital C. file.mm file.M Objective-C++ source code that must be preprocessed. file.mii Objective-C++ source code that should not be preprocessed. file.hh file.H file.hp file.hxx file.hpp file.HPP file.h++ file.tcc C++ header file to be turned into a precompiled header or Ada spec. 
file.f file.for file.ftn Fixed form Fortran source code that should not be preprocessed. file.F file.FOR file.fpp file.FPP file.FTN Fixed form Fortran source code that must be preprocessed (with the traditional preprocessor). file.f90 file.f95 file.f03 file.f08 Free form Fortran source code that should not be preprocessed. file.F90 file.F95 file.F03 file.F08 Free form Fortran source code that must be preprocessed (with the traditional preprocessor). file.go Go source code. file.brig BRIG files (binary representation of HSAIL). file.d D source code. file.di D interface file. file.dd D documentation code (Ddoc). file.ads Ada source code file that contains a library unit declaration (a declaration of a package, subprogram, or generic, or a generic instantiation), or a library unit renaming declaration (a package, generic, or subprogram renaming declaration). Such files are also called specs. file.adb Ada source code file containing a library unit body (a subprogram or package body). Such files are also called bodies. file.s Assembler code. file.S file.sx Assembler code that must be preprocessed. other An object file to be fed straight into linking. Any file name with no recognized suffix is treated this way. You can specify the input language explicitly with the -x option: -x language Specify explicitly the language for the following input files (rather than letting the compiler choose a default based on the file name suffix). This option applies to all following input files until the next -x option. Possible values for language are: c c-header cpp-output c++ c++-header c++-cpp-output objective-c objective-c-header objective-c-cpp-output objective-c++ objective-c++-header objective-c++-cpp-output assembler assembler-with-cpp ada d f77 f77-cpp-input f95 f95-cpp-input go brig -x none Turn off any specification of a language, so that subsequent files are handled according to their file name suffixes (as they are if -x has not been used at all). If you only want some of the stages of compilation, you can use -x (or filename suffixes) to tell gcc where to start, and one of the options -c, -S, or -E to say where gcc is to stop. Note that some combinations (for example, -x cpp-output -E) instruct gcc to do nothing at all. -c Compile or assemble the source files, but do not link. The linking stage simply is not done. The ultimate output is in the form of an object file for each source file. By default, the object file name for a source file is made by replacing the suffix .c, .i, .s, etc., with .o. Unrecognized input files, not requiring compilation or assembly, are ignored. -S Stop after the stage of compilation proper; do not assemble. The output is in the form of an assembler code file for each non-assembler input file specified. By default, the assembler file name for a source file is made by replacing the suffix .c, .i, etc., with .s. Input files that don't require compilation are ignored. -E Stop after the preprocessing stage; do not run the compiler proper. The output is in the form of preprocessed source code, which is sent to the standard output. Input files that don't require preprocessing are ignored. -o file Place output in file file. This applies to whatever sort of output is being produced, whether it be an executable file, an object file, an assembler file or preprocessed C code. 
If -o is not specified, the default is to put an executable file in a.out, the object file for source.suffix in source.o, its assembler file in source.s, a precompiled header file in source.suffix.gch, and all preprocessed C source on standard output. -v Print (on standard error output) the commands executed to run the stages of compilation. Also print the version number of the compiler driver program and of the preprocessor and the compiler proper. -### Like -v except the commands are not executed and arguments are quoted unless they contain only alphanumeric characters or "./-_". This is useful for shell scripts to capture the driver-generated command lines. --help Print (on the standard output) a description of the command- line options understood by gcc. If the -v option is also specified then --help is also passed on to the various processes invoked by gcc, so that they can display the command-line options they accept. If the -Wextra option has also been specified (prior to the --help option), then command-line options that have no documentation associated with them are also displayed. --target-help Print (on the standard output) a description of target- specific command-line options for each tool. For some targets extra target-specific information may also be printed. --help={class|[^]qualifier}[,...] Print (on the standard output) a description of the command- line options understood by the compiler that fit into all specified classes and qualifiers. These are the supported classes: optimizers Display all of the optimization options supported by the compiler. warnings Display all of the options controlling warning messages produced by the compiler. target Display target-specific options. Unlike the --target-help option however, target-specific options of the linker and assembler are not displayed. This is because those tools do not currently support the extended --help= syntax. params Display the values recognized by the --param option. language Display the options supported for language, where language is the name of one of the languages supported in this version of GCC. common Display the options that are common to all languages. These are the supported qualifiers: undocumented Display only those options that are undocumented. joined Display options taking an argument that appears after an equal sign in the same continuous piece of text, such as: --help=target. separate Display options taking an argument that appears as a separate word following the original option, such as: -o output-file. Thus for example to display all the undocumented target- specific switches supported by the compiler, use: --help=target,undocumented The sense of a qualifier can be inverted by prefixing it with the ^ character, so for example to display all binary warning options (i.e., ones that are either on or off and that do not take an argument) that have a description, use: --help=warnings,^joined,^undocumented The argument to --help= should not consist solely of inverted qualifiers. Combining several classes is possible, although this usually restricts the output so much that there is nothing to display. One case where it does work, however, is when one of the classes is target. For example, to display all the target-specific optimization options, use: --help=target,optimizers The --help= option can be repeated on the command line. Each successive use displays its requested class of options, skipping those that have already been displayed. 
If --help is also specified anywhere on the command line then this takes precedence over any --help= option. If the -Q option appears on the command line before the --help= option, then the descriptive text displayed by --help= is changed. Instead of describing the displayed options, an indication is given as to whether the option is enabled, disabled or set to a specific value (assuming that the compiler knows this at the point where the --help= option is used). Here is a truncated example from the ARM port of gcc: % gcc -Q -mabi=2 --help=target -c The following options are target specific: -mabi= 2 -mabort-on-noreturn [disabled] -mapcs [disabled] The output is sensitive to the effects of previous command- line options, so for example it is possible to find out which optimizations are enabled at -O2 by using: -Q -O2 --help=optimizers Alternatively you can discover which binary optimizations are enabled by -O3 by using: gcc -c -Q -O3 --help=optimizers > /tmp/O3-opts gcc -c -Q -O2 --help=optimizers > /tmp/O2-opts diff /tmp/O2-opts /tmp/O3-opts | grep enabled --version Display the version number and copyrights of the invoked GCC. -pass-exit-codes Normally the gcc program exits with the code of 1 if any phase of the compiler returns a non-success return code. If you specify -pass-exit-codes, the gcc program instead returns with the numerically highest error produced by any phase returning an error indication. The C, C++, and Fortran front ends return 4 if an internal compiler error is encountered. -pipe Use pipes rather than temporary files for communication between the various stages of compilation. This fails to work on some systems where the assembler is unable to read from a pipe; but the GNU assembler has no trouble. -specs=file Process file after the compiler reads in the standard specs file, in order to override the defaults which the gcc driver program uses when determining what switches to pass to cc1, cc1plus, as, ld, etc. More than one -specs=file can be specified on the command line, and they are processed in order, from left to right. -wrapper Invoke all subcommands under a wrapper program. The name of the wrapper program and its parameters are passed as a comma separated list. gcc -c t.c -wrapper gdb,--args This invokes all subprograms of gcc under gdb --args, thus the invocation of cc1 is gdb --args cc1 .... -ffile-prefix-map=old=new When compiling files residing in directory old, record any references to them in the result of the compilation as if the files resided in directory new instead. Specifying this option is equivalent to specifying all the individual -f*-prefix-map options. This can be used to make reproducible builds that are location independent. See also -fmacro-prefix-map and -fdebug-prefix-map. -fplugin=name.so Load the plugin code in file name.so, assumed to be a shared object to be dlopen'd by the compiler. The base name of the shared object file is used to identify the plugin for the purposes of argument parsing (See -fplugin-arg-name-key=value below). Each plugin should define the callback functions specified in the Plugins API. -fplugin-arg-name-key=value Define an argument called key with a value of value for the plugin called name. -fdump-ada-spec[-slim] For C and C++ source and include files, generate corresponding Ada specs. -fada-spec-parent=unit In conjunction with -fdump-ada-spec[-slim] above, generate Ada specs as child units of parent unit. -fdump-go-spec=file For input files in any language, generate corresponding Go declarations in file. 
This generates Go "const", "type", "var", and "func" declarations which may be a useful way to start writing a Go interface to code written in some other language. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. Compiling C++ Programs C++ source files conventionally use one of the suffixes .C, .cc, .cpp, .CPP, .c++, .cp, or .cxx; C++ header files often use .hh, .hpp, .H, or (for shared template code) .tcc; and preprocessed C++ files use the suffix .ii. GCC recognizes files with these names and compiles them as C++ programs even if you call the compiler the same way as for compiling C programs (usually with the name gcc). However, the use of gcc does not add the C++ library. g++ is a program that calls GCC and automatically specifies linking against the C++ library. It treats .c, .h and .i files as C++ source files instead of C source files unless -x is used. This program is also useful when precompiling a C header file with a .h extension for use in C++ compilations. On many systems, g++ is also installed with the name c++. When you compile C++ programs, you may specify many of the same command-line options that you use for compiling programs in any language; or command-line options meaningful for C and related languages; or options that are meaningful only for C++ programs. Options Controlling C Dialect The following options control the dialect of C (or languages derived from C, such as C++, Objective-C and Objective-C++) that the compiler accepts: -ansi In C mode, this is equivalent to -std=c90. In C++ mode, it is equivalent to -std=c++98. This turns off certain features of GCC that are incompatible with ISO C90 (when compiling C code), or of standard C++ (when compiling C++ code), such as the "asm" and "typeof" keywords, and predefined macros such as "unix" and "vax" that identify the type of system you are using. It also enables the undesirable and rarely used ISO trigraph feature. For the C compiler, it disables recognition of C++ style // comments as well as the "inline" keyword. The alternate keywords "__asm__", "__extension__", "__inline__" and "__typeof__" continue to work despite -ansi. You would not want to use them in an ISO C program, of course, but it is useful to put them in header files that might be included in compilations done with -ansi. Alternate predefined macros such as "__unix__" and "__vax__" are also available, with or without -ansi. The -ansi option does not cause non-ISO programs to be rejected gratuitously. For that, -Wpedantic is required in addition to -ansi. The macro "__STRICT_ANSI__" is predefined when the -ansi option is used. Some header files may notice this macro and refrain from declaring certain functions or defining certain macros that the ISO standard doesn't call for; this is to avoid interfering with any programs that might use these names for other things. 
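As a minimal sketch of how such a guard typically looks (the macro test is real, but the header fragment and the choice of strdup are illustrative, not copied from any particular libc header), a system header might hide a non-ISO declaration while -ansi is in effect:

        /* Illustrative header fragment: strdup is POSIX, not ISO C90, so it is
           hidden when gcc -ansi predefines __STRICT_ANSI__. */
        #ifndef __STRICT_ANSI__
        extern char *strdup(const char *s);
        #endif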
Functions that are normally built in but do not have semantics defined by ISO C (such as "alloca" and "ffs") are not built-in functions when -ansi is used. -std= Determine the language standard. This option is currently only supported when compiling C or C++. The compiler can accept several base standards, such as c90 or c++98, and GNU dialects of those standards, such as gnu90 or gnu++98. When a base standard is specified, the compiler accepts all programs following that standard plus those using GNU extensions that do not contradict it. For example, -std=c90 turns off certain features of GCC that are incompatible with ISO C90, such as the "asm" and "typeof" keywords, but not other GNU extensions that do not have a meaning in ISO C90, such as omitting the middle term of a "?:" expression. On the other hand, when a GNU dialect of a standard is specified, all features supported by the compiler are enabled, even when those features change the meaning of the base standard. As a result, some strict-conforming programs may be rejected. The particular standard is used by -Wpedantic to identify which features are GNU extensions given that version of the standard. For example -std=gnu90 -Wpedantic warns about C++ style // comments, while -std=gnu99 -Wpedantic does not. A value for this option must be provided; possible values are c90 c89 iso9899:1990 Support all ISO C90 programs (certain GNU extensions that conflict with ISO C90 are disabled). Same as -ansi for C code. iso9899:199409 ISO C90 as modified in amendment 1. c99 c9x iso9899:1999 iso9899:199x ISO C99. This standard is substantially completely supported, modulo bugs and floating-point issues (mainly but not entirely relating to optional C99 features from Annexes F and G). See <http://gcc.gnu.org/c99status.html > for more information. The names c9x and iso9899:199x are deprecated. c11 c1x iso9899:2011 ISO C11, the 2011 revision of the ISO C standard. This standard is substantially completely supported, modulo bugs, floating-point issues (mainly but not entirely relating to optional C11 features from Annexes F and G) and the optional Annexes K (Bounds-checking interfaces) and L (Analyzability). The name c1x is deprecated. c17 c18 iso9899:2017 iso9899:2018 ISO C17, the 2017 revision of the ISO C standard (published in 2018). This standard is same as C11 except for corrections of defects (all of which are also applied with -std=c11) and a new value of "__STDC_VERSION__", and so is supported to the same extent as C11. c2x The next version of the ISO C standard, still under development. The support for this version is experimental and incomplete. gnu90 gnu89 GNU dialect of ISO C90 (including some C99 features). gnu99 gnu9x GNU dialect of ISO C99. The name gnu9x is deprecated. gnu11 gnu1x GNU dialect of ISO C11. The name gnu1x is deprecated. gnu17 gnu18 GNU dialect of ISO C17. This is the default for C code. gnu2x The next version of the ISO C standard, still under development, plus GNU extensions. The support for this version is experimental and incomplete. c++98 c++03 The 1998 ISO C++ standard plus the 2003 technical corrigendum and some additional defect reports. Same as -ansi for C++ code. gnu++98 gnu++03 GNU dialect of -std=c++98. c++11 c++0x The 2011 ISO C++ standard plus amendments. The name c++0x is deprecated. gnu++11 gnu++0x GNU dialect of -std=c++11. The name gnu++0x is deprecated. c++14 c++1y The 2014 ISO C++ standard plus amendments. The name c++1y is deprecated. gnu++14 gnu++1y GNU dialect of -std=c++14. 
This is the default for C++ code. The name gnu++1y is deprecated. c++17 c++1z The 2017 ISO C++ standard plus amendments. The name c++1z is deprecated. gnu++17 gnu++1z GNU dialect of -std=c++17. The name gnu++1z is deprecated. c++2a The next revision of the ISO C++ standard, tentatively planned for 2020. Support is highly experimental, and will almost certainly change in incompatible ways in future releases. gnu++2a GNU dialect of -std=c++2a. Support is highly experimental, and will almost certainly change in incompatible ways in future releases. -fgnu89-inline The option -fgnu89-inline tells GCC to use the traditional GNU semantics for "inline" functions when in C99 mode. Using this option is roughly equivalent to adding the "gnu_inline" function attribute to all inline functions. The option -fno-gnu89-inline explicitly tells GCC to use the C99 semantics for "inline" when in C99 or gnu99 mode (i.e., it specifies the default behavior). This option is not supported in -std=c90 or -std=gnu90 mode. The preprocessor macros "__GNUC_GNU_INLINE__" and "__GNUC_STDC_INLINE__" may be used to check which semantics are in effect for "inline" functions. -fpermitted-flt-eval-methods=style ISO/IEC TS 18661-3 defines new permissible values for "FLT_EVAL_METHOD" that indicate that operations and constants with a semantic type that is an interchange or extended format should be evaluated to the precision and range of that type. These new values are a superset of those permitted under C99/C11, which does not specify the meaning of other positive values of "FLT_EVAL_METHOD". As such, code conforming to C11 may not have been written expecting the possibility of the new values. -fpermitted-flt-eval-methods specifies whether the compiler should allow only the values of "FLT_EVAL_METHOD" specified in C99/C11, or the extended set of values specified in ISO/IEC TS 18661-3. style is either "c11" or "ts-18661-3" as appropriate. The default when in a standards compliant mode (-std=c11 or similar) is -fpermitted-flt-eval-methods=c11. The default when in a GNU dialect (-std=gnu11 or similar) is -fpermitted-flt-eval-methods=ts-18661-3. -aux-info filename Output to the given filename prototyped declarations for all functions declared and/or defined in a translation unit, including those in header files. This option is silently ignored in any language other than C. Besides declarations, the file indicates, in comments, the origin of each declaration (source file and line), whether the declaration was implicit, prototyped or unprototyped (I, N for new or O for old, respectively, in the first character after the line number and the colon), and whether it came from a declaration or a definition (C or F, respectively, in the following character). In the case of function definitions, a K&R-style list of arguments followed by their declarations is also provided, inside comments, after the declaration. -fallow-parameterless-variadic-functions Accept variadic functions without named parameters. Although it is possible to define such a function, this is not very useful as it is not possible to read the arguments. This is only supported for C as this construct is allowed by C++. -fno-asm Do not recognize "asm", "inline" or "typeof" as a keyword, so that code can use these words as identifiers. You can use the keywords "__asm__", "__inline__" and "__typeof__" instead. -ansi implies -fno-asm. In C++, this switch only affects the "typeof" keyword, since "asm" and "inline" are standard keywords. 
You may want to use the -fno-gnu-keywords flag instead, which has the same effect. In C99 mode (-std=c99 or -std=gnu99), this switch only affects the "asm" and "typeof" keywords, since "inline" is a standard keyword in ISO C99. -fno-builtin -fno-builtin-function Don't recognize built-in functions that do not begin with __builtin_ as prefix. GCC normally generates special code to handle certain built- in functions more efficiently; for instance, calls to "alloca" may become single instructions which adjust the stack directly, and calls to "memcpy" may become inline copy loops. The resulting code is often both smaller and faster, but since the function calls no longer appear as such, you cannot set a breakpoint on those calls, nor can you change the behavior of the functions by linking with a different library. In addition, when a function is recognized as a built-in function, GCC may use information about that function to warn about problems with calls to that function, or to generate more efficient code, even if the resulting code still contains calls to that function. For example, warnings are given with -Wformat for bad calls to "printf" when "printf" is built in and "strlen" is known not to modify global memory. With the -fno-builtin-function option only the built-in function function is disabled. function must not begin with __builtin_. If a function is named that is not built-in in this version of GCC, this option is ignored. There is no corresponding -fbuiltin-function option; if you wish to enable built-in functions selectively when using -fno-builtin or -ffreestanding, you may define macros such as: #define abs(n) __builtin_abs ((n)) #define strcpy(d, s) __builtin_strcpy ((d), (s)) -fgimple Enable parsing of function definitions marked with "__GIMPLE". This is an experimental feature that allows unit testing of GIMPLE passes. -fhosted Assert that compilation targets a hosted environment. This implies -fbuiltin. A hosted environment is one in which the entire standard library is available, and in which "main" has a return type of "int". Examples are nearly everything except a kernel. This is equivalent to -fno-freestanding. -ffreestanding Assert that compilation targets a freestanding environment. This implies -fno-builtin. A freestanding environment is one in which the standard library may not exist, and program startup may not necessarily be at "main". The most obvious example is an OS kernel. This is equivalent to -fno-hosted. -fopenacc Enable handling of OpenACC directives "#pragma acc" in C/C++ and "!$acc" in Fortran. When -fopenacc is specified, the compiler generates accelerated code according to the OpenACC Application Programming Interface v2.0 <https://www.openacc.org >. This option implies -pthread, and thus is only supported on targets that have support for -pthread. -fopenacc-dim=geom Specify default compute dimensions for parallel offload regions that do not explicitly specify. The geom value is a triple of ':'-separated sizes, in order 'gang', 'worker' and, 'vector'. A size can be omitted, to use a target-specific default value. -fopenmp Enable handling of OpenMP directives "#pragma omp" in C/C++ and "!$omp" in Fortran. When -fopenmp is specified, the compiler generates parallel code according to the OpenMP Application Program Interface v4.5 <https://www.openmp.org >. This option implies -pthread, and thus is only supported on targets that have support for -pthread. -fopenmp implies -fopenmp-simd. 
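For illustration, here is a sketch of a loop that GCC parallelizes when the translation unit is built with -fopenmp; the function name, signature and file name are invented for the example, and without -fopenmp the pragma is simply ignored:

        /* Build with: gcc -fopenmp saxpy.c   (file name illustrative) */
        #include <stddef.h>

        void saxpy(size_t n, float a, const float *x, float *y)
        {
        #pragma omp parallel for
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }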
-fopenmp-simd
Enable handling of OpenMP's SIMD directives with "#pragma omp" in C/C++ and "!$omp" in Fortran. Other OpenMP directives are ignored.

-fgnu-tm
When the option -fgnu-tm is specified, the compiler generates code for the Linux variant of Intel's current Transactional Memory ABI specification document (Revision 1.1, May 6 2009). This is an experimental feature whose interface may change in future versions of GCC, as the official specification changes. Please note that not all architectures are supported for this feature. For more information on GCC's support for transactional memory, see the GNU Transactional Memory Library (libitm) documentation. Note that the transactional memory feature is not supported with non-call exceptions (-fnon-call-exceptions).

-fms-extensions
Accept some non-standard constructs used in Microsoft header files. In C++ code, this allows member names in structures to be similar to previous type declarations.

        typedef int UOW;
        struct ABC {
          UOW UOW;
        };

Some cases of unnamed fields in structures and unions are only accepted with this option. Note that this option is off for all targets but x86 targets using ms-abi.

-fplan9-extensions
Accept some non-standard constructs used in Plan 9 code. This enables -fms-extensions, permits passing pointers to structures with anonymous fields to functions that expect pointers to elements of the type of the field, and permits referring to anonymous fields declared using a typedef. This is only supported for C, not C++.

-fcond-mismatch
Allow conditional expressions with mismatched types in the second and third arguments. The value of such an expression is void. This option is not supported for C++.

-flax-vector-conversions
Allow implicit conversions between vectors with differing numbers of elements and/or incompatible element types. This option should not be used for new code.

-funsigned-char
Let the type "char" be unsigned, like "unsigned char". Each kind of machine has a default for what "char" should be. It is either like "unsigned char" by default or like "signed char" by default. Ideally, a portable program should always use "signed char" or "unsigned char" when it depends on the signedness of an object. But many programs have been written to use plain "char" and expect it to be signed, or expect it to be unsigned, depending on the machines they were written for. This option, and its inverse, let you make such a program work with the opposite default. The type "char" is always a distinct type from each of "signed char" or "unsigned char", even though its behavior is always just like one of those two.

-fsigned-char
Let the type "char" be signed, like "signed char". Note that this is equivalent to -fno-unsigned-char, which is the negative form of -funsigned-char. Likewise, the option -fno-signed-char is equivalent to -funsigned-char.

-fsigned-bitfields
-funsigned-bitfields
-fno-signed-bitfields
-fno-unsigned-bitfields
These options control whether a bit-field is signed or unsigned, when the declaration does not use either "signed" or "unsigned". By default, such a bit-field is signed, because this is consistent: the basic integer types such as "int" are signed types.

-fsso-struct=endianness
Set the default scalar storage order of structures and unions to the specified endianness. The accepted values are big-endian, little-endian and native for the native endianness of the target (the default). This option is not supported for C++.
Warning: the -fsso-struct switch causes GCC to generate code that is not binary compatible with code generated without it if the specified endianness is not the native endianness of the target. Options Controlling C++ Dialect This section describes the command-line options that are only meaningful for C++ programs. You can also use most of the GNU compiler options regardless of what language your program is in. For example, you might compile a file firstClass.C like this: g++ -g -fstrict-enums -O -c firstClass.C In this example, only -fstrict-enums is an option meant only for C++ programs; you can use the other options with any language supported by GCC. Some options for compiling C programs, such as -std, are also relevant for C++ programs. Here is a list of options that are only for compiling C++ programs: -fabi-version=n Use version n of the C++ ABI. The default is version 0. Version 0 refers to the version conforming most closely to the C++ ABI specification. Therefore, the ABI obtained using version 0 will change in different versions of G++ as ABI bugs are fixed. Version 1 is the version of the C++ ABI that first appeared in G++ 3.2. Version 2 is the version of the C++ ABI that first appeared in G++ 3.4, and was the default through G++ 4.9. Version 3 corrects an error in mangling a constant address as a template argument. Version 4, which first appeared in G++ 4.5, implements a standard mangling for vector types. Version 5, which first appeared in G++ 4.6, corrects the mangling of attribute const/volatile on function pointer types, decltype of a plain decl, and use of a function parameter in the declaration of another parameter. Version 6, which first appeared in G++ 4.7, corrects the promotion behavior of C++11 scoped enums and the mangling of template argument packs, const/static_cast, prefix ++ and --, and a class scope function used as a template argument. Version 7, which first appeared in G++ 4.8, that treats nullptr_t as a builtin type and corrects the mangling of lambdas in default argument scope. Version 8, which first appeared in G++ 4.9, corrects the substitution behavior of function types with function-cv- qualifiers. Version 9, which first appeared in G++ 5.2, corrects the alignment of "nullptr_t". Version 10, which first appeared in G++ 6.1, adds mangling of attributes that affect type identity, such as ia32 calling convention attributes (e.g. stdcall). Version 11, which first appeared in G++ 7, corrects the mangling of sizeof... expressions and operator names. For multiple entities with the same name within a function, that are declared in different scopes, the mangling now changes starting with the twelfth occurrence. It also implies -fnew-inheriting-ctors. Version 12, which first appeared in G++ 8, corrects the calling conventions for empty classes on the x86_64 target and for classes with only deleted copy/move constructors. It accidentally changes the calling convention for classes with a deleted copy constructor and a trivial move constructor. Version 13, which first appeared in G++ 8.2, fixes the accidental change in version 12. See also -Wabi. -fabi-compat-version=n On targets that support strong aliases, G++ works around mangling changes by creating an alias with the correct mangled name when defining a symbol with an incorrect mangled name. This switch specifies which ABI version to use for the alias. With -fabi-version=0 (the default), this defaults to 11 (GCC 7 compatibility). If another ABI version is explicitly selected, this defaults to 0. 
For compatibility with GCC versions 3.2 through 4.9, use -fabi-compat-version=2. If this option is not provided but -Wabi=n is, that version is used for compatibility aliases. If this option is provided along with -Wabi (without the version), the version from this option is used for the warning. -fno-access-control Turn off all access checking. This switch is mainly useful for working around bugs in the access control code. -faligned-new Enable support for C++17 "new" of types that require more alignment than "void* ::operator new(std::size_t)" provides. A numeric argument such as "-faligned-new=32" can be used to specify how much alignment (in bytes) is provided by that function, but few users will need to override the default of "alignof(std::max_align_t)". This flag is enabled by default for -std=c++17. -fchar8_t -fno-char8_t Enable support for "char8_t" as adopted for C++2a. This includes the addition of a new "char8_t" fundamental type, changes to the types of UTF-8 string and character literals, new signatures for user-defined literals, associated standard library updates, and new "__cpp_char8_t" and "__cpp_lib_char8_t" feature test macros. This option enables functions to be overloaded for ordinary and UTF-8 strings: int f(const char *); // #1 int f(const char8_t *); // #2 int v1 = f("text"); // Calls #1 int v2 = f(u8"text"); // Calls #2 and introduces new signatures for user-defined literals: int operator""_udl1(char8_t); int v3 = u8'x'_udl1; int operator""_udl2(const char8_t*, std::size_t); int v4 = u8"text"_udl2; template<typename T, T...> int operator""_udl3(); int v5 = u8"text"_udl3; The change to the types of UTF-8 string and character literals introduces incompatibilities with ISO C++11 and later standards. For example, the following code is well- formed under ISO C++11, but is ill-formed when -fchar8_t is specified. char ca[] = u8"xx"; // error: char-array initialized from wide // string const char *cp = u8"xx";// error: invalid conversion from // `const char8_t*' to `const char*' int f(const char*); auto v = f(u8"xx"); // error: invalid conversion from // `const char8_t*' to `const char*' std::string s{u8"xx"}; // error: no matching function for call to // `std::basic_string<char>::basic_string()' using namespace std::literals; s = u8"xx"s; // error: conversion from // `basic_string<char8_t>' to non-scalar // type `basic_string<char>' requested -fcheck-new Check that the pointer returned by "operator new" is non-null before attempting to modify the storage allocated. This check is normally unnecessary because the C++ standard specifies that "operator new" only returns 0 if it is declared "throw()", in which case the compiler always checks the return value even without this option. In all other cases, when "operator new" has a non-empty exception specification, memory exhaustion is signalled by throwing "std::bad_alloc". See also new (nothrow). -fconcepts Enable support for the C++ Extensions for Concepts Technical Specification, ISO 19217 (2015), which allows code like template <class T> concept bool Addable = requires (T t) { t + t; }; template <Addable T> T add (T a, T b) { return a + b; } -fconstexpr-depth=n Set the maximum nested evaluation depth for C++11 constexpr functions to n. A limit is needed to detect endless recursion during constant expression evaluation. The minimum specified by the standard is 512. -fconstexpr-loop-limit=n Set the maximum number of iterations for a loop in C++14 constexpr functions to n. 
A limit is needed to detect infinite loops during constant expression evaluation. The default is 262144 (1<<18). -fconstexpr-ops-limit=n Set the maximum number of operations during a single constexpr evaluation. Even when number of iterations of a single loop is limited with the above limit, if there are several nested loops and each of them has many iterations but still smaller than the above limit, or if in a body of some loop or even outside of a loop too many expressions need to be evaluated, the resulting constexpr evaluation might take too long. The default is 33554432 (1<<25). -fdeduce-init-list Enable deduction of a template type parameter as "std::initializer_list" from a brace-enclosed initializer list, i.e. template <class T> auto forward(T t) -> decltype (realfn (t)) { return realfn (t); } void f() { forward({1,2}); // call forward<std::initializer_list<int>> } This deduction was implemented as a possible extension to the originally proposed semantics for the C++11 standard, but was not part of the final standard, so it is disabled by default. This option is deprecated, and may be removed in a future version of G++. -fno-elide-constructors The C++ standard allows an implementation to omit creating a temporary that is only used to initialize another object of the same type. Specifying this option disables that optimization, and forces G++ to call the copy constructor in all cases. This option also causes G++ to call trivial member functions which otherwise would be expanded inline. In C++17, the compiler is required to omit these temporaries, but this option still affects trivial member functions. -fno-enforce-eh-specs Don't generate code to check for violation of exception specifications at run time. This option violates the C++ standard, but may be useful for reducing code size in production builds, much like defining "NDEBUG". This does not give user code permission to throw exceptions in violation of the exception specifications; the compiler still optimizes based on the specifications, so throwing an unexpected exception results in undefined behavior at run time. -fextern-tls-init -fno-extern-tls-init The C++11 and OpenMP standards allow "thread_local" and "threadprivate" variables to have dynamic (runtime) initialization. To support this, any use of such a variable goes through a wrapper function that performs any necessary initialization. When the use and definition of the variable are in the same translation unit, this overhead can be optimized away, but when the use is in a different translation unit there is significant overhead even if the variable doesn't actually need dynamic initialization. If the programmer can be sure that no use of the variable in a non-defining TU needs to trigger dynamic initialization (either because the variable is statically initialized, or a use of the variable in the defining TU will be executed before any uses in another TU), they can avoid this overhead with the -fno-extern-tls-init option. On targets that support symbol aliases, the default is -fextern-tls-init. On targets that do not support symbol aliases, the default is -fno-extern-tls-init. -fno-gnu-keywords Do not recognize "typeof" as a keyword, so that code can use this word as an identifier. You can use the keyword "__typeof__" instead. This option is implied by the strict ISO C++ dialects: -ansi, -std=c++98, -std=c++11, etc. -fno-implicit-templates Never emit code for non-inline templates that are instantiated implicitly (i.e. 
by use); only emit code for explicit instantiations. If you use this option, you must take care to structure your code to include all the necessary explicit instantiations to avoid getting undefined symbols at link time. -fno-implicit-inline-templates Don't emit code for implicit instantiations of inline templates, either. The default is to handle inlines differently so that compiles with and without optimization need the same set of explicit instantiations. -fno-implement-inlines To save space, do not emit out-of-line copies of inline functions controlled by "#pragma implementation". This causes linker errors if these functions are not inlined everywhere they are called. -fms-extensions Disable Wpedantic warnings about constructs used in MFC, such as implicit int and getting a pointer to member function via non-standard syntax. -fnew-inheriting-ctors Enable the P0136 adjustment to the semantics of C++11 constructor inheritance. This is part of C++17 but also considered to be a Defect Report against C++11 and C++14. This flag is enabled by default unless -fabi-version=10 or lower is specified. -fnew-ttp-matching Enable the P0522 resolution to Core issue 150, template template parameters and default arguments: this allows a template with default template arguments as an argument for a template template parameter with fewer template parameters. This flag is enabled by default for -std=c++17. -fno-nonansi-builtins Disable built-in declarations of functions that are not mandated by ANSI/ISO C. These include "ffs", "alloca", "_exit", "index", "bzero", "conjf", and other related functions. -fnothrow-opt Treat a "throw()" exception specification as if it were a "noexcept" specification to reduce or eliminate the text size overhead relative to a function with no exception specification. If the function has local variables of types with non-trivial destructors, the exception specification actually makes the function smaller because the EH cleanups for those variables can be optimized away. The semantic effect is that an exception thrown out of a function with such an exception specification results in a call to "terminate" rather than "unexpected". -fno-operator-names Do not treat the operator name keywords "and", "bitand", "bitor", "compl", "not", "or" and "xor" as synonyms as keywords. -fno-optional-diags Disable diagnostics that the standard says a compiler does not need to issue. Currently, the only such diagnostic issued by G++ is the one for a name having multiple meanings within a class. -fpermissive Downgrade some diagnostics about nonconformant code from errors to warnings. Thus, using -fpermissive allows some nonconforming code to compile. -fno-pretty-templates When an error message refers to a specialization of a function template, the compiler normally prints the signature of the template followed by the template arguments and any typedefs or typenames in the signature (e.g. "void f(T) [with T = int]" rather than "void f(int)") so that it's clear which template is involved. When an error message refers to a specialization of a class template, the compiler omits any template arguments that match the default template arguments for that template. If either of these behaviors make it harder to understand the error message rather than easier, you can use -fno-pretty-templates to disable them. -frepo Enable automatic template instantiation at link time. This option also implies -fno-implicit-templates. 
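As a brief sketch of what that means in practice (the template and its name are invented for the example), when implicit instantiation is disabled some translation unit has to provide each needed instantiation explicitly:

        // With -fno-implicit-templates (also implied by -frepo), a use of
        // min_of<int> emits no code by itself; this explicit instantiation
        // in one translation unit provides the definition the linker needs.
        template <typename T> T min_of(T a, T b) { return a < b ? a : b; }

        template int min_of<int>(int, int);   // explicit instantiation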
-fno-rtti Disable generation of information about every class with virtual functions for use by the C++ run-time type identification features ("dynamic_cast" and "typeid"). If you don't use those parts of the language, you can save some space by using this flag. Note that exception handling uses the same information, but G++ generates it as needed. The "dynamic_cast" operator can still be used for casts that do not require run-time type information, i.e. casts to "void *" or to unambiguous base classes. Mixing code compiled with -frtti with that compiled with -fno-rtti may not work. For example, programs may fail to link if a class compiled with -fno-rtti is used as a base for a class compiled with -frtti. -fsized-deallocation Enable the built-in global declarations void operator delete (void *, std::size_t) noexcept; void operator delete[] (void *, std::size_t) noexcept; as introduced in C++14. This is useful for user-defined replacement deallocation functions that, for example, use the size of the object to make deallocation faster. Enabled by default under -std=c++14 and above. The flag -Wsized-deallocation warns about places that might want to add a definition. -fstrict-enums Allow the compiler to optimize using the assumption that a value of enumerated type can only be one of the values of the enumeration (as defined in the C++ standard; basically, a value that can be represented in the minimum number of bits needed to represent all the enumerators). This assumption may not be valid if the program uses a cast to convert an arbitrary integer value to the enumerated type. -fstrong-eval-order Evaluate member access, array subscripting, and shift expressions in left-to-right order, and evaluate assignment in right-to-left order, as adopted for C++17. Enabled by default with -std=c++17. -fstrong-eval-order=some enables just the ordering of member access and shift expressions, and is the default without -std=c++17. -ftemplate-backtrace-limit=n Set the maximum number of template instantiation notes for a single warning or error to n. The default value is 10. -ftemplate-depth=n Set the maximum instantiation depth for template classes to n. A limit on the template instantiation depth is needed to detect endless recursions during template class instantiation. ANSI/ISO C++ conforming programs must not rely on a maximum depth greater than 17 (changed to 1024 in C++11). The default value is 900, as the compiler can run out of stack space before hitting 1024 in some situations. -fno-threadsafe-statics Do not emit the extra code to use the routines specified in the C++ ABI for thread-safe initialization of local statics. You can use this option to reduce code size slightly in code that doesn't need to be thread-safe. -fuse-cxa-atexit Register destructors for objects with static storage duration with the "__cxa_atexit" function rather than the "atexit" function. This option is required for fully standards- compliant handling of static destructors, but only works if your C library supports "__cxa_atexit". -fno-use-cxa-get-exception-ptr Don't use the "__cxa_get_exception_ptr" runtime routine. This causes "std::uncaught_exception" to be incorrect, but is necessary if the runtime routine is not available. -fvisibility-inlines-hidden This switch declares that the user does not attempt to compare pointers to inline functions or methods where the addresses of the two functions are taken in different shared objects. 
The effect of this is that GCC may, effectively, mark inline methods with "__attribute__ ((visibility ("hidden")))" so that they do not appear in the export table of a DSO and do not require a PLT indirection when used within the DSO. Enabling this option can have a dramatic effect on load and link times of a DSO as it massively reduces the size of the dynamic export table when the library makes heavy use of templates.

The behavior of this switch is not quite the same as marking the methods as hidden directly, because it does not affect static variables local to the function or cause the compiler to deduce that the function is defined in only one shared object.

You may mark a method as having a visibility explicitly to negate the effect of the switch for that method. For example, if you do want to compare pointers to a particular inline method, you might mark it as having default visibility. Marking the enclosing class with explicit visibility has no effect.

Explicitly instantiated inline methods are unaffected by this option as their linkage might otherwise cross a shared library boundary.

-fvisibility-ms-compat
This flag attempts to use visibility settings to make GCC's C++ linkage model compatible with that of Microsoft Visual Studio.

The flag makes these changes to GCC's linkage model:

1. It sets the default visibility to "hidden", like -fvisibility=hidden.

2. Types, but not their members, are not hidden by default.

3. The One Definition Rule is relaxed for types without explicit visibility specifications that are defined in more than one shared object: those declarations are permitted if they are permitted when this option is not used.

In new code it is better to use -fvisibility=hidden and export those classes that are intended to be externally visible. Unfortunately it is possible for code to rely, perhaps accidentally, on the Visual Studio behavior.

Among the consequences of these changes are that static data members of the same type with the same name but defined in different shared objects are different, so changing one does not change the other; and that pointers to function members defined in different shared objects may not compare equal. When this flag is given, it is a violation of the ODR to define types with the same name differently.

-fno-weak
Do not use weak symbol support, even if it is provided by the linker. By default, G++ uses weak symbols if they are available. This option exists only for testing, and should not be used by end-users; it results in inferior code and has no benefits. This option may be removed in a future release of G++.

-nostdinc++
Do not search for header files in the standard directories specific to C++, but do still search the other standard directories. (This option is used when building the C++ library.)

In addition, these optimization, warning, and code generation options have meanings only for C++ programs:

-Wabi (C, Objective-C, C++ and Objective-C++ only)
Warn when G++ generates code that is probably not compatible with the vendor-neutral C++ ABI. Since G++ now defaults to updating the ABI with each major release, normally -Wabi will warn only if there is a check added later in a release series for an ABI issue discovered since the initial release.

-Wabi will warn about more things if an older ABI version is selected (with -fabi-version=n).

-Wabi can also be used with an explicit version number to warn about compatibility with a particular -fabi-version level, e.g. -Wabi=2 to warn about changes relative to -fabi-version=2.
If an explicit version number is provided and -fabi-compat-version is not specified, the version number from this option is used for compatibility aliases. If no explicit version number is provided with this option, but -fabi-compat-version is specified, that version number is used for ABI warnings. Although an effort has been made to warn about all such cases, there are probably some cases that are not warned about, even though G++ is generating incompatible code. There may also be cases where warnings are emitted even though the code that is generated is compatible. You should rewrite your code to avoid these warnings if you are concerned about the fact that code generated by G++ may not be binary compatible with code generated by other compilers. Known incompatibilities in -fabi-version=2 (which was the default from GCC 3.4 to 4.9) include: * A template with a non-type template parameter of reference type was mangled incorrectly: extern int N; template <int &> struct S {}; void n (S<N>) {2} This was fixed in -fabi-version=3. * SIMD vector types declared using "__attribute ((vector_size))" were mangled in a non-standard way that does not allow for overloading of functions taking vectors of different sizes. The mangling was changed in -fabi-version=4. * "__attribute ((const))" and "noreturn" were mangled as type qualifiers, and "decltype" of a plain declaration was folded away. These mangling issues were fixed in -fabi-version=5. * Scoped enumerators passed as arguments to a variadic function are promoted like unscoped enumerators, causing "va_arg" to complain. On most targets this does not actually affect the parameter passing ABI, as there is no way to pass an argument smaller than "int". Also, the ABI changed the mangling of template argument packs, "const_cast", "static_cast", prefix increment/decrement, and a class scope function used as a template argument. These issues were corrected in -fabi-version=6. * Lambdas in default argument scope were mangled incorrectly, and the ABI changed the mangling of "nullptr_t". These issues were corrected in -fabi-version=7. * When mangling a function type with function-cv- qualifiers, the un-qualified function type was incorrectly treated as a substitution candidate. This was fixed in -fabi-version=8, the default for GCC 5.1. * "decltype(nullptr)" incorrectly had an alignment of 1, leading to unaligned accesses. Note that this did not affect the ABI of a function with a "nullptr_t" parameter, as parameters have a minimum alignment. This was fixed in -fabi-version=9, the default for GCC 5.2. * Target-specific attributes that affect the identity of a type, such as ia32 calling conventions on a function type (stdcall, regparm, etc.), did not affect the mangled name, leading to name collisions when function pointers were used as template arguments. This was fixed in -fabi-version=10, the default for GCC 6.1. It also warns about psABI-related changes. The known psABI changes at this point include: * For SysV/x86-64, unions with "long double" members are passed in memory as specified in psABI. For example: union U { long double ld; int i; }; "union U" is always passed in memory. -Wabi-tag (C++ and Objective-C++ only) Warn when a type with an ABI tag is used in a context that does not have that ABI tag. See C++ Attributes for more information about ABI tags. 
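A small, hypothetical illustration of the situation -Wabi-tag reports (the type names are invented; this mirrors, for example, how libstdc++ tags its C++11 std::string): a tagged type is used inside an untagged one, so the outer type's mangled name carries no trace of the tag.

        // Hypothetical types: Inner carries the ABI tag "v2"; Outer does not.
        struct __attribute__((abi_tag("v2"))) Inner { int i; };
        struct Outer { Inner in; };     // -Wabi-tag: Outer lacks the "v2" tag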
-Wctor-dtor-privacy (C++ and Objective-C++ only) Warn when a class seems unusable because all the constructors or destructors in that class are private, and it has neither friends nor public static member functions. Also warn if there are no non-private methods, and there's at least one private member function that isn't a constructor or destructor. -Wdelete-non-virtual-dtor (C++ and Objective-C++ only) Warn when "delete" is used to destroy an instance of a class that has virtual functions and non-virtual destructor. It is unsafe to delete an instance of a derived class through a pointer to a base class if the base class does not have a virtual destructor. This warning is enabled by -Wall. -Wdeprecated-copy (C++ and Objective-C++ only) Warn that the implicit declaration of a copy constructor or copy assignment operator is deprecated if the class has a user-provided copy constructor or copy assignment operator, in C++11 and up. This warning is enabled by -Wextra. With -Wdeprecated-copy-dtor, also deprecate if the class has a user-provided destructor. -Wno-init-list-lifetime (C++ and Objective-C++ only) Do not warn about uses of "std::initializer_list" that are likely to result in dangling pointers. Since the underlying array for an "initializer_list" is handled like a normal C++ temporary object, it is easy to inadvertently keep a pointer to the array past the end of the array's lifetime. For example: * If a function returns a temporary "initializer_list", or a local "initializer_list" variable, the array's lifetime ends at the end of the return statement, so the value returned has a dangling pointer. * If a new-expression creates an "initializer_list", the array only lives until the end of the enclosing full- expression, so the "initializer_list" in the heap has a dangling pointer. * When an "initializer_list" variable is assigned from a brace-enclosed initializer list, the temporary array created for the right side of the assignment only lives until the end of the full-expression, so at the next statement the "initializer_list" variable has a dangling pointer. // li's initial underlying array lives as long as li std::initializer_list<int> li = { 1,2,3 }; // assignment changes li to point to a temporary array li = { 4, 5 }; // now the temporary is gone and li has a dangling pointer int i = li.begin()[0] // undefined behavior * When a list constructor stores the "begin" pointer from the "initializer_list" argument, this doesn't extend the lifetime of the array, so if a class variable is constructed from a temporary "initializer_list", the pointer is left dangling by the end of the variable declaration statement. -Wliteral-suffix (C++ and Objective-C++ only) Warn when a string or character literal is followed by a ud- suffix which does not begin with an underscore. As a conforming extension, GCC treats such suffixes as separate preprocessing tokens in order to maintain backwards compatibility with code that uses formatting macros from "<inttypes.h>". For example: #define __STDC_FORMAT_MACROS #include <inttypes.h> #include <stdio.h> int main() { int64_t i64 = 123; printf("My int64: %" PRId64"\n", i64); } In this case, "PRId64" is treated as a separate preprocessing token. Additionally, warn when a user-defined literal operator is declared with a literal suffix identifier that doesn't begin with an underscore. Literal suffix identifiers that don't begin with an underscore are reserved for future standardization. This warning is enabled by default. 
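To illustrate the second case (the operator names are invented for the example), declaring a literal operator whose suffix does not start with an underscore draws the warning, while the underscored form does not:

        long double operator"" km(long double v)   // warned: suffix is reserved
        { return v * 1000.0L; }

        long double operator"" _km(long double v)  // OK: begins with an underscore
        { return v * 1000.0L; }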
-Wlto-type-mismatch During the link-time optimization warn about type mismatches in global declarations from different compilation units. Requires -flto to be enabled. Enabled by default. -Wno-narrowing (C++ and Objective-C++ only) For C++11 and later standards, narrowing conversions are diagnosed by default, as required by the standard. A narrowing conversion from a constant produces an error, and a narrowing conversion from a non-constant produces a warning, but -Wno-narrowing suppresses the diagnostic. Note that this does not affect the meaning of well-formed code; narrowing conversions are still considered ill-formed in SFINAE contexts. With -Wnarrowing in C++98, warn when a narrowing conversion prohibited by C++11 occurs within { }, e.g. int i = { 2.2 }; // error: narrowing from double to int This flag is included in -Wall and -Wc++11-compat. -Wnoexcept (C++ and Objective-C++ only) Warn when a noexcept-expression evaluates to false because of a call to a function that does not have a non-throwing exception specification (i.e. "throw()" or "noexcept") but is known by the compiler to never throw an exception. -Wnoexcept-type (C++ and Objective-C++ only) Warn if the C++17 feature making "noexcept" part of a function type changes the mangled name of a symbol relative to C++14. Enabled by -Wabi and -Wc++17-compat. As an example: template <class T> void f(T t) { t(); }; void g() noexcept; void h() { f(g); } In C++14, "f" calls "f<void(*)()>", but in C++17 it calls "f<void(*)()noexcept>". -Wclass-memaccess (C++ and Objective-C++ only) Warn when the destination of a call to a raw memory function such as "memset" or "memcpy" is an object of class type, and when writing into such an object might bypass the class non- trivial or deleted constructor or copy assignment, violate const-correctness or encapsulation, or corrupt virtual table pointers. Modifying the representation of such objects may violate invariants maintained by member functions of the class. For example, the call to "memset" below is undefined because it modifies a non-trivial class object and is, therefore, diagnosed. The safe way to either initialize or clear the storage of objects of such types is by using the appropriate constructor or assignment operator, if one is available. std::string str = "abc"; memset (&str, 0, sizeof str); The -Wclass-memaccess option is enabled by -Wall. Explicitly casting the pointer to the class object to "void *" or to a type that can be safely accessed by the raw memory function suppresses the warning. -Wnon-virtual-dtor (C++ and Objective-C++ only) Warn when a class has virtual functions and an accessible non-virtual destructor itself or in an accessible polymorphic base class, in which case it is possible but unsafe to delete an instance of a derived class through a pointer to the class itself or base class. This warning is automatically enabled if -Weffc++ is specified. -Wregister (C++ and Objective-C++ only) Warn on uses of the "register" storage class specifier, except when it is part of the GNU Explicit Register Variables extension. The use of the "register" keyword as storage class specifier has been deprecated in C++11 and removed in C++17. Enabled by default with -std=c++17. -Wreorder (C++ and Objective-C++ only) Warn when the order of member initializers given in the code does not match the order in which they must be executed. 
For instance: struct A { int i; int j; A(): j (0), i (1) { } }; The compiler rearranges the member initializers for "i" and "j" to match the declaration order of the members, emitting a warning to that effect. This warning is enabled by -Wall. -Wno-pessimizing-move (C++ and Objective-C++ only) This warning warns when a call to "std::move" prevents copy elision. A typical scenario when copy elision can occur is when returning in a function with a class return type, when the expression being returned is the name of a non-volatile automatic object, and is not a function parameter, and has the same type as the function return type. struct T { ... }; T fn() { T t; ... return std::move (t); } But in this example, the "std::move" call prevents copy elision. This warning is enabled by -Wall. -Wno-redundant-move (C++ and Objective-C++ only) This warning warns about redundant calls to "std::move"; that is, when a move operation would have been performed even without the "std::move" call. This happens because the compiler is forced to treat the object as if it were an rvalue in certain situations such as returning a local variable, where copy elision isn't applicable. Consider: struct T { ... }; T fn(T t) { ... return std::move (t); } Here, the "std::move" call is redundant. Because G++ implements Core Issue 1579, another example is: struct T { // convertible to U ... }; struct U { ... }; U fn() { T t; ... return std::move (t); } In this example, copy elision isn't applicable because the type of the expression being returned and the function return type differ, yet G++ treats the return value as if it were designated by an rvalue. This warning is enabled by -Wextra. -fext-numeric-literals (C++ and Objective-C++ only) Accept imaginary, fixed-point, or machine-defined literal number suffixes as GNU extensions. When this option is turned off these suffixes are treated as C++11 user-defined literal numeric suffixes. This is on by default for all pre-C++11 dialects and all GNU dialects: -std=c++98, -std=gnu++98, -std=gnu++11, -std=gnu++14. This option is off by default for ISO C++11 onwards (-std=c++11, ...). The following -W... options are not affected by -Wall. -Weffc++ (C++ and Objective-C++ only) Warn about violations of the following style guidelines from Scott Meyers' Effective C++ series of books: * Define a copy constructor and an assignment operator for classes with dynamically-allocated memory. * Prefer initialization to assignment in constructors. * Have "operator=" return a reference to *this. * Don't try to return a reference when you must return an object. * Distinguish between prefix and postfix forms of increment and decrement operators. * Never overload "&&", "||", or ",". This option also enables -Wnon-virtual-dtor, which is also one of the effective C++ recommendations. However, the check is extended to warn about the lack of virtual destructor in accessible non-polymorphic bases classes too. When selecting this option, be aware that the standard library headers do not obey all of these guidelines; use grep -v to filter out those warnings. -Wstrict-null-sentinel (C++ and Objective-C++ only) Warn about the use of an uncasted "NULL" as sentinel. When compiling only with GCC this is a valid sentinel, as "NULL" is defined to "__null". Although it is a null pointer constant rather than a null pointer, it is guaranteed to be of the same size as a pointer. But this use is not portable across different compilers. 
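To make the -Wstrict-null-sentinel case concrete, here is a small sketch; the function log_all is hypothetical and is marked with GCC's "sentinel" attribute purely for illustration:

        #include <cstddef>

        // Variadic function that requires a trailing null pointer.
        void log_all(const char *first, ...) __attribute__ ((sentinel));

        void demo() {
            // NULL expands to __null in G++: accepted by GCC itself, but
            // -Wstrict-null-sentinel warns because the uncasted form is not
            // portable to other compilers.
            log_all("alpha", "beta", NULL);

            // An explicitly cast null pointer is portable and is not warned about.
            log_all("alpha", "beta", (const char *)NULL);
        }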
-Wno-non-template-friend (C++ and Objective-C++ only) Disable warnings when non-template friend functions are declared within a template. In very old versions of GCC that predate implementation of the ISO standard, declarations such as friend int foo(int), where the name of the friend is an unqualified-id, could be interpreted as a particular specialization of a template function; the warning exists to diagnose compatibility problems, and is enabled by default. -Wold-style-cast (C++ and Objective-C++ only) Warn if an old-style (C-style) cast to a non-void type is used within a C++ program. The new-style casts ("dynamic_cast", "static_cast", "reinterpret_cast", and "const_cast") are less vulnerable to unintended effects and much easier to search for. -Woverloaded-virtual (C++ and Objective-C++ only) Warn when a function declaration hides virtual functions from a base class. For example, in: struct A { virtual void f(); }; struct B: public A { void f(int); }; the "A" class version of "f" is hidden in "B", and code like: B* b; b->f(); fails to compile. -Wno-pmf-conversions (C++ and Objective-C++ only) Disable the diagnostic for converting a bound pointer to member function to a plain pointer. -Wsign-promo (C++ and Objective-C++ only) Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size. Previous versions of G++ tried to preserve unsignedness, but the standard mandates the current behavior. -Wtemplates (C++ and Objective-C++ only) Warn when a primary template declaration is encountered. Some coding rules disallow templates, and this may be used to enforce that rule. The warning is inactive inside a system header file, such as the STL, so one can still use the STL. One may also instantiate or specialize templates. -Wmultiple-inheritance (C++ and Objective-C++ only) Warn when a class is defined with multiple direct base classes. Some coding rules disallow multiple inheritance, and this may be used to enforce that rule. The warning is inactive inside a system header file, such as the STL, so one can still use the STL. One may also define classes that indirectly use multiple inheritance. -Wvirtual-inheritance Warn when a class is defined with a virtual direct base class. Some coding rules disallow multiple inheritance, and this may be used to enforce that rule. The warning is inactive inside a system header file, such as the STL, so one can still use the STL. One may also define classes that indirectly use virtual inheritance. -Wnamespaces Warn when a namespace definition is opened. Some coding rules disallow namespaces, and this may be used to enforce that rule. The warning is inactive inside a system header file, such as the STL, so one can still use the STL. One may also use using directives and qualified names. -Wno-terminate (C++ and Objective-C++ only) Disable the warning about a throw-expression that will immediately result in a call to "terminate". -Wno-class-conversion (C++ and Objective-C++ only) Disable the warning about the case when a conversion function converts an object to the same type, to a base class of that type, or to void; such a conversion function will never be called. Options Controlling Objective-C and Objective-C++ Dialects (NOTE: This manual does not describe the Objective-C and Objective-C++ languages themselves. This section describes the command-line options that are only meaningful for Objective-C and Objective-C++ programs. 
You can also use most of the language-independent GNU compiler options. For example, you might compile a file some_class.m like this: gcc -g -fgnu-runtime -O -c some_class.m In this example, -fgnu-runtime is an option meant only for Objective-C and Objective-C++ programs; you can use the other options with any language supported by GCC. Note that since Objective-C is an extension of the C language, Objective-C compilations may also use options specific to the C front-end (e.g., -Wtraditional). Similarly, Objective-C++ compilations may use C++-specific options (e.g., -Wabi). Here is a list of options that are only for compiling Objective-C and Objective-C++ programs: -fconstant-string-class=class-name Use class-name as the name of the class to instantiate for each literal string specified with the syntax "@"..."". The default class name is "NXConstantString" if the GNU runtime is being used, and "NSConstantString" if the NeXT runtime is being used (see below). The -fconstant-cfstrings option, if also present, overrides the -fconstant-string-class setting and cause "@"..."" literals to be laid out as constant CoreFoundation strings. -fgnu-runtime Generate object code compatible with the standard GNU Objective-C runtime. This is the default for most types of systems. -fnext-runtime Generate output compatible with the NeXT runtime. This is the default for NeXT-based systems, including Darwin and Mac OS X. The macro "__NEXT_RUNTIME__" is predefined if (and only if) this option is used. -fno-nil-receivers Assume that all Objective-C message dispatches ("[receiver message:arg]") in this translation unit ensure that the receiver is not "nil". This allows for more efficient entry points in the runtime to be used. This option is only available in conjunction with the NeXT runtime and ABI version 0 or 1. -fobjc-abi-version=n Use version n of the Objective-C ABI for the selected runtime. This option is currently supported only for the NeXT runtime. In that case, Version 0 is the traditional (32-bit) ABI without support for properties and other Objective-C 2.0 additions. Version 1 is the traditional (32-bit) ABI with support for properties and other Objective- C 2.0 additions. Version 2 is the modern (64-bit) ABI. If nothing is specified, the default is Version 0 on 32-bit target machines, and Version 2 on 64-bit target machines. -fobjc-call-cxx-cdtors For each Objective-C class, check if any of its instance variables is a C++ object with a non-trivial default constructor. If so, synthesize a special "- (id) .cxx_construct" instance method which runs non-trivial default constructors on any such instance variables, in order, and then return "self". Similarly, check if any instance variable is a C++ object with a non-trivial destructor, and if so, synthesize a special "- (void) .cxx_destruct" method which runs all such default destructors, in reverse order. The "- (id) .cxx_construct" and "- (void) .cxx_destruct" methods thusly generated only operate on instance variables declared in the current Objective-C class, and not those inherited from superclasses. It is the responsibility of the Objective-C runtime to invoke all such methods in an object's inheritance hierarchy. The "- (id) .cxx_construct" methods are invoked by the runtime immediately after a new object instance is allocated; the "- (void) .cxx_destruct" methods are invoked immediately before the runtime deallocates an object instance. 
As of this writing, only the NeXT runtime on Mac OS X 10.4 and later has support for invoking the "- (id) .cxx_construct" and "- (void) .cxx_destruct" methods. -fobjc-direct-dispatch Allow fast jumps to the message dispatcher. On Darwin this is accomplished via the comm page. -fobjc-exceptions Enable syntactic support for structured exception handling in Objective-C, similar to what is offered by C++. This option is required to use the Objective-C keywords @try, @throw, @catch, @finally and @synchronized. This option is available with both the GNU runtime and the NeXT runtime (but not available in conjunction with the NeXT runtime on Mac OS X 10.2 and earlier). -fobjc-gc Enable garbage collection (GC) in Objective-C and Objective-C++ programs. This option is only available with the NeXT runtime; the GNU runtime has a different garbage collection implementation that does not require special compiler flags. -fobjc-nilcheck For the NeXT runtime with version 2 of the ABI, check for a nil receiver in method invocations before doing the actual method call. This is the default and can be disabled using -fno-objc-nilcheck. Class methods and super calls are never checked for nil in this way no matter what this flag is set to. Currently this flag does nothing when the GNU runtime, or an older version of the NeXT runtime ABI, is used. -fobjc-std=objc1 Conform to the language syntax of Objective-C 1.0, the language recognized by GCC 4.0. This only affects the Objective-C additions to the C/C++ language; it does not affect conformance to C/C++ standards, which is controlled by the separate C/C++ dialect option flags. When this option is used with the Objective-C or Objective-C++ compiler, any Objective-C syntax that is not recognized by GCC 4.0 is rejected. This is useful if you need to make sure that your Objective-C code can be compiled with older versions of GCC. -freplace-objc-classes Emit a special marker instructing ld(1) not to statically link in the resulting object file, and allow dyld(1) to load it in at run time instead. This is used in conjunction with the Fix-and-Continue debugging mode, where the object file in question may be recompiled and dynamically reloaded in the course of program execution, without the need to restart the program itself. Currently, Fix-and-Continue functionality is only available in conjunction with the NeXT runtime on Mac OS X 10.3 and later. -fzero-link When compiling for the NeXT runtime, the compiler ordinarily replaces calls to "objc_getClass("...")" (when the name of the class is known at compile time) with static class references that get initialized at load time, which improves run-time performance. Specifying the -fzero-link flag suppresses this behavior and causes calls to "objc_getClass("...")" to be retained. This is useful in Zero-Link debugging mode, since it allows for individual class implementations to be modified during program execution. The GNU runtime currently always retains calls to "objc_get_class("...")" regardless of command-line options. -fno-local-ivars By default instance variables in Objective-C can be accessed as if they were local variables from within the methods of the class they're declared in. This can lead to shadowing between instance variables and other variables declared either locally inside a class method or globally with the same name. Specifying the -fno-local-ivars flag disables this behavior thus avoiding variable shadowing issues. 
-fivar-visibility=[public|protected|private|package] Set the default instance variable visibility to the specified option so that instance variables declared outside the scope of any access modifier directives default to the specified visibility. -gen-decls Dump interface declarations for all classes seen in the source file to a file named sourcename.decl. -Wassign-intercept (Objective-C and Objective-C++ only) Warn whenever an Objective-C assignment is being intercepted by the garbage collector. -Wno-protocol (Objective-C and Objective-C++ only) If a class is declared to implement a protocol, a warning is issued for every method in the protocol that is not implemented by the class. The default behavior is to issue a warning for every method not explicitly implemented in the class, even if a method implementation is inherited from the superclass. If you use the -Wno-protocol option, then methods inherited from the superclass are considered to be implemented, and no warning is issued for them. -Wselector (Objective-C and Objective-C++ only) Warn if multiple methods of different types for the same selector are found during compilation. The check is performed on the list of methods in the final stage of compilation. Additionally, a check is performed for each selector appearing in a "@selector(...)" expression, and a corresponding method for that selector has been found during compilation. Because these checks scan the method table only at the end of compilation, these warnings are not produced if the final stage of compilation is not reached, for example because an error is found during compilation, or because the -fsyntax-only option is being used. -Wstrict-selector-match (Objective-C and Objective-C++ only) Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type "id" or "Class". When this flag is off (which is the default behavior), the compiler omits such warnings if any differences found are confined to types that share the same size and alignment. -Wundeclared-selector (Objective-C and Objective-C++ only) Warn if a "@selector(...)" expression referring to an undeclared selector is found. A selector is considered undeclared if no method with that name has been declared before the "@selector(...)" expression, either explicitly in an @interface or @protocol declaration, or implicitly in an @implementation section. This option always performs its checks as soon as a "@selector(...)" expression is found, while -Wselector only performs its checks in the final stage of compilation. This also enforces the coding style convention that methods and selectors must be declared before being used. -print-objc-runtime-info Generate C header describing the largest structure that is passed by value, if any. Options to Control Diagnostic Messages Formatting Traditionally, diagnostic messages have been formatted irrespective of the output device's aspect (e.g. its width, ...). You can use the options described below to control the formatting algorithm for diagnostic messages, e.g. how many characters per line, how often source location information should be reported. Note that some language front ends may not honor these options. -fmessage-length=n Try to format error messages so that they fit on lines of about n characters. If n is zero, then no line-wrapping is done; each error message appears on a single line. This is the default for all front ends. 
Note - this option also affects the display of the #error and #warning pre-processor directives, and the deprecated function/type/variable attribute. It does not however affect the pragma GCC warning and pragma GCC error pragmas. -fdiagnostics-show-location=once Only meaningful in line-wrapping mode. Instructs the diagnostic messages reporter to emit source location information once; that is, in case the message is too long to fit on a single physical line and has to be wrapped, the source location won't be emitted (as prefix) again, over and over, in subsequent continuation lines. This is the default behavior. -fdiagnostics-show-location=every-line Only meaningful in line-wrapping mode. Instructs the diagnostic messages reporter to emit the same source location information (as prefix) for physical lines that result from the process of breaking a message which is too long to fit on a single line. -fdiagnostics-color[=WHEN] -fno-diagnostics-color Use color in diagnostics. WHEN is never, always, or auto. The default depends on how the compiler has been configured, it can be any of the above WHEN options or also never if GCC_COLORS environment variable isn't present in the environment, and auto otherwise. auto means to use color only when the standard error is a terminal. The forms -fdiagnostics-color and -fno-diagnostics-color are aliases for -fdiagnostics-color=always and -fdiagnostics-color=never, respectively. The colors are defined by the environment variable GCC_COLORS. Its value is a colon-separated list of capabilities and Select Graphic Rendition (SGR) substrings. SGR commands are interpreted by the terminal or terminal emulator. (See the section in the documentation of your text terminal for permitted values and their meanings as character attributes.) These substring values are integers in decimal representation and can be concatenated with semicolons. Common values to concatenate include 1 for bold, 4 for underline, 5 for blink, 7 for inverse, 39 for default foreground color, 30 to 37 for foreground colors, 90 to 97 for 16-color mode foreground colors, 38;5;0 to 38;5;255 for 88-color and 256-color modes foreground colors, 49 for default background color, 40 to 47 for background colors, 100 to 107 for 16-color mode background colors, and 48;5;0 to 48;5;255 for 88-color and 256-color modes background colors. The default GCC_COLORS is error=01;31:warning=01;35:note=01;36:range1=32:range2=34:locus=01:\ quote=01:fixit-insert=32:fixit-delete=31:\ diff-filename=01:diff-hunk=32:diff-delete=31:diff-insert=32:\ type-diff=01;32 where 01;31 is bold red, 01;35 is bold magenta, 01;36 is bold cyan, 32 is green, 34 is blue, 01 is bold, and 31 is red. Setting GCC_COLORS to the empty string disables colors. Supported capabilities are as follows. "error=" SGR substring for error: markers. "warning=" SGR substring for warning: markers. "note=" SGR substring for note: markers. "range1=" SGR substring for first additional range. "range2=" SGR substring for second additional range. "locus=" SGR substring for location information, file:line or file:line:column etc. "quote=" SGR substring for information printed within quotes. "fixit-insert=" SGR substring for fix-it hints suggesting text to be inserted or replaced. "fixit-delete=" SGR substring for fix-it hints suggesting text to be deleted. "diff-filename=" SGR substring for filename headers within generated patches. "diff-hunk=" SGR substring for the starts of hunks within generated patches. 
"diff-delete=" SGR substring for deleted lines within generated patches. "diff-insert=" SGR substring for inserted lines within generated patches. "type-diff=" SGR substring for highlighting mismatching types within template arguments in the C++ frontend. -fno-diagnostics-show-option By default, each diagnostic emitted includes text indicating the command-line option that directly controls the diagnostic (if such an option is known to the diagnostic machinery). Specifying the -fno-diagnostics-show-option flag suppresses that behavior. -fno-diagnostics-show-caret By default, each diagnostic emitted includes the original source line and a caret ^ indicating the column. This option suppresses this information. The source line is truncated to n characters, if the -fmessage-length=n option is given. When the output is done to the terminal, the width is limited to the width given by the COLUMNS environment variable or, if not set, to the terminal width. -fno-diagnostics-show-labels By default, when printing source code (via -fdiagnostics-show-caret), diagnostics can label ranges of source code with pertinent information, such as the types of expressions: printf ("foo %s bar", long_i + long_j); ~^ ~~~~~~~~~~~~~~~ | | char * long int This option suppresses the printing of these labels (in the example above, the vertical bars and the "char *" and "long int" text). -fno-diagnostics-show-line-numbers By default, when printing source code (via -fdiagnostics-show-caret), a left margin is printed, showing line numbers. This option suppresses this left margin. -fdiagnostics-minimum-margin-width=width This option controls the minimum width of the left margin printed by -fdiagnostics-show-line-numbers. It defaults to 6. -fdiagnostics-parseable-fixits Emit fix-it hints in a machine-parseable format, suitable for consumption by IDEs. For each fix-it, a line will be printed after the relevant diagnostic, starting with the string "fix- it:". For example: fix-it:"test.c":{45:3-45:21}:"gtk_widget_show_all" The location is expressed as a half-open range, expressed as a count of bytes, starting at byte 1 for the initial column. In the above example, bytes 3 through 20 of line 45 of "test.c" are to be replaced with the given string: 00000000011111111112222222222 12345678901234567890123456789 gtk_widget_showall (dlg); ^^^^^^^^^^^^^^^^^^ gtk_widget_show_all The filename and replacement string escape backslash as "\\", tab as "\t", newline as "\n", double quotes as "\"", non- printable characters as octal (e.g. vertical tab as "\013"). An empty replacement string indicates that the given range is to be removed. An empty range (e.g. "45:3-45:3") indicates that the string is to be inserted at the given position. -fdiagnostics-generate-patch Print fix-it hints to stderr in unified diff format, after any diagnostics are printed. For example: --- test.c +++ test.c @ -42,5 +42,5 @ void show_cb(GtkDialog *dlg) { - gtk_widget_showall(dlg); + gtk_widget_show_all(dlg); } The diff may or may not be colorized, following the same rules as for diagnostics (see -fdiagnostics-color). 
-fdiagnostics-show-template-tree In the C++ frontend, when printing diagnostics showing mismatching template types, such as: could not convert 'std::map<int, std::vector<double> >()' from 'map<[...],vector<double>>' to 'map<[...],vector<float>> the -fdiagnostics-show-template-tree flag enables printing a tree-like structure showing the common and differing parts of the types, such as: map< [...], vector< [double != float]>> The parts that differ are highlighted with color ("double" and "float" in this case). -fno-elide-type By default when the C++ frontend prints diagnostics showing mismatching template types, common parts of the types are printed as "[...]" to simplify the error message. For example: could not convert 'std::map<int, std::vector<double> >()' from 'map<[...],vector<double>>' to 'map<[...],vector<float>> Specifying the -fno-elide-type flag suppresses that behavior. This flag also affects the output of the -fdiagnostics-show-template-tree flag. -fno-show-column Do not print column numbers in diagnostics. This may be necessary if diagnostics are being scanned by a program that does not understand the column numbers, such as dejagnu. -fdiagnostics-format=FORMAT Select a different format for printing diagnostics. FORMAT is text or json. The default is text. The json format consists of a top-level JSON array containing JSON objects representing the diagnostics. The JSON is emitted as one line, without formatting; the examples below have been formatted for clarity. Diagnostics can have child diagnostics. For example, this error and note: misleading-indentation.c:15:3: warning: this 'if' clause does not guard... [-Wmisleading-indentation] 15 | if (flag) | ^~ misleading-indentation.c:17:5: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if' 17 | y = 2; | ^ might be printed in JSON form (after formatting) like this: [ { "kind": "warning", "locations": [ { "caret": { "column": 3, "file": "misleading-indentation.c", "line": 15 }, "finish": { "column": 4, "file": "misleading-indentation.c", "line": 15 } } ], "message": "this \u2018if\u2019 clause does not guard...", "option": "-Wmisleading-indentation", "children": [ { "kind": "note", "locations": [ { "caret": { "column": 5, "file": "misleading-indentation.c", "line": 17 } } ], "message": "...this statement, but the latter is ..." } ] }, ... ] where the "note" is a child of the "warning". A diagnostic has a "kind". If this is "warning", then there is an "option" key describing the command-line option controlling the warning. A diagnostic can contain zero or more locations. Each location has up to three positions within it: a "caret" position and optional "start" and "finish" positions. A location can also have an optional "label" string. For example, this error: bad-binary-ops.c:64:23: error: invalid operands to binary + (have 'S' {aka 'struct s'} and 'T' {aka 'struct t'}) 64 | return callee_4a () + callee_4b (); | ~~~~~~~~~~~~ ^ ~~~~~~~~~~~~ | | | | | T {aka struct t} | S {aka struct s} has three locations. Its primary location is at the "+" token at column 23. It has two secondary locations, describing the left and right-hand sides of the expression, which have labels. 
It might be printed in JSON form as: { "children": [], "kind": "error", "locations": [ { "caret": { "column": 23, "file": "bad-binary-ops.c", "line": 64 } }, { "caret": { "column": 10, "file": "bad-binary-ops.c", "line": 64 }, "finish": { "column": 21, "file": "bad-binary-ops.c", "line": 64 }, "label": "S {aka struct s}" }, { "caret": { "column": 25, "file": "bad-binary-ops.c", "line": 64 }, "finish": { "column": 36, "file": "bad-binary-ops.c", "line": 64 }, "label": "T {aka struct t}" } ], "message": "invalid operands to binary + ..." } If a diagnostic contains fix-it hints, it has a "fixits" array, consisting of half-open intervals, similar to the output of -fdiagnostics-parseable-fixits. For example, this diagnostic with a replacement fix-it hint: demo.c:8:15: error: 'struct s' has no member named 'colour'; did you mean 'color'? 8 | return ptr->colour; | ^~~~~~ | color might be printed in JSON form as: { "children": [], "fixits": [ { "next": { "column": 21, "file": "demo.c", "line": 8 }, "start": { "column": 15, "file": "demo.c", "line": 8 }, "string": "color" } ], "kind": "error", "locations": [ { "caret": { "column": 15, "file": "demo.c", "line": 8 }, "finish": { "column": 20, "file": "demo.c", "line": 8 } } ], "message": "\u2018struct s\u2019 has no member named ..." } where the fix-it hint suggests replacing the text from "start" up to but not including "next" with "string"'s value. Deletions are expressed via an empty value for "string", insertions by having "start" equal "next". Options to Request or Suppress Warnings Warnings are diagnostic messages that report constructions that are not inherently erroneous but that are risky or suggest there may have been an error. The following language-independent options do not enable specific warnings but control the kinds of diagnostics produced by GCC. -fsyntax-only Check the code for syntax errors, but don't do anything beyond that. -fmax-errors=n Limits the maximum number of error messages to n, at which point GCC bails out rather than attempting to continue processing the source code. If n is 0 (the default), there is no limit on the number of error messages produced. If -Wfatal-errors is also specified, then -Wfatal-errors takes precedence over this option. -w Inhibit all warning messages. -Werror Make all warnings into errors. -Werror= Make the specified warning into an error. The specifier for a warning is appended; for example -Werror=switch turns the warnings controlled by -Wswitch into errors. This switch takes a negative form, to be used to negate -Werror for specific warnings; for example -Wno-error=switch makes -Wswitch warnings not be errors, even when -Werror is in effect. The warning message for each controllable warning includes the option that controls the warning. That option can then be used with -Werror= and -Wno-error= as described above. (Printing of the option in the warning message can be disabled using the -fno-diagnostics-show-option flag.) Note that specifying -Werror=foo automatically implies -Wfoo. However, -Wno-error=foo does not imply anything. -Wfatal-errors This option causes the compiler to abort compilation on the first error occurred rather than trying to keep going and printing further error messages. You can request many specific warnings with options beginning with -W, for example -Wimplicit to request warnings on implicit declarations. Each of these specific warning options also has a negative form beginning -Wno- to turn off warnings; for example, -Wno-implicit. 
This manual lists only one of the two forms, whichever is not the default. For further language-specific options also refer to C++ Dialect Options and Objective-C and Objective-C++ Dialect Options. Some options, such as -Wall and -Wextra, turn on other options, such as -Wunused, which may turn on further options, such as -Wunused-value. The combined effect of positive and negative forms is that more specific options have priority over less specific ones, independently of their position in the command- line. For options of the same specificity, the last one takes effect. Options enabled or disabled via pragmas take effect as if they appeared at the end of the command-line. When an unrecognized warning option is requested (e.g., -Wunknown-warning), GCC emits a diagnostic stating that the option is not recognized. However, if the -Wno- form is used, the behavior is slightly different: no diagnostic is produced for -Wno-unknown-warning unless other diagnostics are being produced. This allows the use of new -Wno- options with old compilers, but if something goes wrong, the compiler warns that an unrecognized option is present. The effectiveness of some warnings depends on optimizations also being enabled. For example -Wsuggest-final-types is more effective with link-time optimization and -Wmaybe-uninitialized will not warn at all unless optimization is enabled. -Wpedantic -pedantic Issue all the warnings demanded by strict ISO C and ISO C++; reject all programs that use forbidden extensions, and some other programs that do not follow ISO C and ISO C++. For ISO C, follows the version of the ISO C standard specified by any -std option used. Valid ISO C and ISO C++ programs should compile properly with or without this option (though a rare few require -ansi or a -std option specifying the required version of ISO C). However, without this option, certain GNU extensions and traditional C and C++ features are supported as well. With this option, they are rejected. -Wpedantic does not cause warning messages for use of the alternate keywords whose names begin and end with __. Pedantic warnings are also disabled in the expression that follows "__extension__". However, only system header files should use these escape routes; application programs should avoid them. Some users try to use -Wpedantic to check programs for strict ISO C conformance. They soon find that it does not do quite what they want: it finds some non-ISO practices, but not all---only those for which ISO C requires a diagnostic, and some others for which diagnostics have been added. A feature to report any failure to conform to ISO C might be useful in some instances, but would require considerable additional work and would be quite different from -Wpedantic. We don't have plans to support such a feature in the near future. Where the standard specified with -std represents a GNU extended dialect of C, such as gnu90 or gnu99, there is a corresponding base standard, the version of ISO C on which the GNU extended dialect is based. Warnings from -Wpedantic are given where they are required by the base standard. (It does not make sense for such warnings to be given only for features not in the specified GNU C dialect, since by definition the GNU dialects of C include all features the compiler supports with the given option, and there would be nothing to warn about.) 
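As a small illustration of the kind of extension -Wpedantic reports (this example is not from the manual), the following function is accepted silently as GNU C++ but draws a pedantic warning under a strict -std= setting, because ISO C++ has no variable-length arrays:

        int sum_first(int n) {
            int values[n];          // VLA: a GNU extension, reported by -Wpedantic
            int total = 0;
            for (int i = 0; i < n; ++i) {
                values[i] = i;
                total += values[i];
            }
            return total;
        }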
-pedantic-errors Give an error whenever the base standard (see -Wpedantic) requires a diagnostic, in some cases where there is undefined behavior at compile-time and in some other cases that do not prevent compilation of programs that are valid according to the standard. This is not equivalent to -Werror=pedantic, since there are errors enabled by this option and not enabled by the latter and vice versa. -Wall This enables all the warnings about constructions that some users consider questionable, and that are easy to avoid (or modify to prevent the warning), even in conjunction with macros. This also enables some language-specific warnings described in C++ Dialect Options and Objective-C and Objective-C++ Dialect Options. -Wall turns on the following warning flags: -Waddress -Warray-bounds=1 (only with -O2) -Wbool-compare -Wbool-operation -Wc++11-compat -Wc++14-compat -Wcatch-value (C++ and Objective-C++ only) -Wchar-subscripts -Wcomment -Wduplicate-decl-specifier (C and Objective-C only) -Wenum-compare (in C/ObjC; this is on by default in C++) -Wformat -Wint-in-bool-context -Wimplicit (C and Objective-C only) -Wimplicit-int (C and Objective-C only) -Wimplicit-function-declaration (C and Objective-C only) -Winit-self (only for C++) -Wlogical-not-parentheses -Wmain (only for C/ObjC and unless -ffreestanding) -Wmaybe-uninitialized -Wmemset-elt-size -Wmemset-transposed-args -Wmisleading-indentation (only for C/C++) -Wmissing-attributes -Wmissing-braces (only for C/ObjC) -Wmultistatement-macros -Wnarrowing (only for C++) -Wnonnull -Wnonnull-compare -Wopenmp-simd -Wparentheses -Wpessimizing-move (only for C++) -Wpointer-sign -Wreorder -Wrestrict -Wreturn-type -Wsequence-point -Wsign-compare (only in C++) -Wsizeof-pointer-div -Wsizeof-pointer-memaccess -Wstrict-aliasing -Wstrict-overflow=1 -Wswitch -Wtautological-compare -Wtrigraphs -Wuninitialized -Wunknown-pragmas -Wunused-function -Wunused-label -Wunused-value -Wunused-variable -Wvolatile-register-var Note that some warning flags are not implied by -Wall. Some of them warn about constructions that users generally do not consider questionable, but which occasionally you might wish to check for; others warn about constructions that are necessary or hard to avoid in some cases, and there is no simple way to modify the code to suppress the warning. Some of them are enabled by -Wextra but many of them must be enabled individually. -Wextra This enables some extra warning flags that are not enabled by -Wall. (This option used to be called -W. The older name is still supported, but the newer name is more descriptive.) -Wclobbered -Wcast-function-type -Wdeprecated-copy (C++ only) -Wempty-body -Wignored-qualifiers -Wimplicit-fallthrough=3 -Wmissing-field-initializers -Wmissing-parameter-type (C only) -Wold-style-declaration (C only) -Woverride-init -Wsign-compare (C only) -Wredundant-move (only for C++) -Wtype-limits -Wuninitialized -Wshift-negative-value (in C++11 to C++17 and in C99 and newer) -Wunused-parameter (only with -Wunused or -Wall) -Wunused-but-set-parameter (only with -Wunused or -Wall) The option -Wextra also prints warning messages for the following cases: * A pointer is compared against integer zero with "<", "<=", ">", or ">=". * (C++ only) An enumerator and a non-enumerator both appear in a conditional expression. * (C++ only) Ambiguous virtual bases. * (C++ only) Subscripting an array that has been declared "register". * (C++ only) Taking the address of a variable that has been declared "register". 
* (C++ only) A base class is not initialized in the copy constructor of a derived class. -Wchar-subscripts Warn if an array subscript has type "char". This is a common cause of error, as programmers often forget that this type is signed on some machines. This warning is enabled by -Wall. -Wno-coverage-mismatch Warn if feedback profiles do not match when using the -fprofile-use option. If a source file is changed between compiling with -fprofile-generate and with -fprofile-use, the files with the profile feedback can fail to match the source file and GCC cannot use the profile feedback information. By default, this warning is enabled and is treated as an error. -Wno-coverage-mismatch can be used to disable the warning or -Wno-error=coverage-mismatch can be used to disable the error. Disabling the error for this warning can result in poorly optimized code and is useful only in the case of very minor changes such as bug fixes to an existing code-base. Completely disabling the warning is not recommended. -Wno-cpp (C, Objective-C, C++, Objective-C++ and Fortran only) Suppress warning messages emitted by "#warning" directives. -Wdouble-promotion (C, C++, Objective-C and Objective-C++ only) Give a warning when a value of type "float" is implicitly promoted to "double". CPUs with a 32-bit "single-precision" floating-point unit implement "float" in hardware, but emulate "double" in software. On such a machine, doing computations using "double" values is much more expensive because of the overhead required for software emulation. It is easy to accidentally do computations with "double" because floating-point literals are implicitly of type "double". For example, in: float area(float radius) { return 3.14159 * radius * radius; } the compiler performs the entire computation with "double" because the floating-point literal is a "double". -Wduplicate-decl-specifier (C and Objective-C only) Warn if a declaration has duplicate "const", "volatile", "restrict" or "_Atomic" specifier. This warning is enabled by -Wall. -Wformat -Wformat=n Check calls to "printf" and "scanf", etc., to make sure that the arguments supplied have types appropriate to the format string specified, and that the conversions specified in the format string make sense. This includes standard functions, and others specified by format attributes, in the "printf", "scanf", "strftime" and "strfmon" (an X/Open extension, not in the C standard) families (or other target-specific families). Which functions are checked without format attributes having been specified depends on the standard version selected, and such checks of functions without the attribute specified are disabled by -ffreestanding or -fno-builtin. The formats are checked against the format features supported by GNU libc version 2.2. These include all ISO C90 and C99 features, as well as features from the Single Unix Specification and some BSD and GNU extensions. Other library implementations may not support all these features; GCC does not support warning about features that go beyond a particular library's limitations. However, if -Wpedantic is used with -Wformat, warnings are given about format features not in the selected standard version (but not for "strfmon" formats, since those are not in any version of the C standard). -Wformat=1 -Wformat Option -Wformat is equivalent to -Wformat=1, and -Wno-format is equivalent to -Wformat=0. Since -Wformat also checks for null format arguments for several functions, -Wformat also implies -Wnonnull. 
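A brief sketch of the kind of mismatch -Wformat reports (the function and values here are illustrative, not taken from the manual):

        #include <cstdio>

        void report(long count, const char *name) {
            // %d expects an int, but count is a long: diagnosed by -Wformat,
            // which is enabled by -Wall.
            std::printf("%d entries found in %s\n", count, name);

            // Matching the length modifier to the argument type avoids the warning.
            std::printf("%ld entries found in %s\n", count, name);
        }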
Some aspects of this level of format checking can be disabled by the options: -Wno-format-contains-nul, -Wno-format-extra-args, and -Wno-format-zero-length. -Wformat is enabled by -Wall. -Wno-format-contains-nul If -Wformat is specified, do not warn about format strings that contain NUL bytes. -Wno-format-extra-args If -Wformat is specified, do not warn about excess arguments to a "printf" or "scanf" format function. The C standard specifies that such arguments are ignored. Where the unused arguments lie between used arguments that are specified with $ operand number specifications, normally warnings are still given, since the implementation could not know what type to pass to "va_arg" to skip the unused arguments. However, in the case of "scanf" formats, this option suppresses the warning if the unused arguments are all pointers, since the Single Unix Specification says that such unused arguments are allowed. -Wformat-overflow -Wformat-overflow=level Warn about calls to formatted input/output functions such as "sprintf" and "vsprintf" that might overflow the destination buffer. When the exact number of bytes written by a format directive cannot be determined at compile-time it is estimated based on heuristics that depend on the level argument and on optimization. While enabling optimization will in most cases improve the accuracy of the warning, it may also result in false positives. -Wformat-overflow -Wformat-overflow=1 Level 1 of -Wformat-overflow enabled by -Wformat employs a conservative approach that warns only about calls that most likely overflow the buffer. At this level, numeric arguments to format directives with unknown values are assumed to have the value of one, and strings of unknown length to be empty. Numeric arguments that are known to be bounded to a subrange of their type, or string arguments whose output is bounded either by their directive's precision or by a finite set of string literals, are assumed to take on the value within the range that results in the most bytes on output. For example, the call to "sprintf" below is diagnosed because even with both a and b equal to zero, the terminating NUL character ('\0') appended by the function to the destination buffer will be written past its end. Increasing the size of the buffer by a single byte is sufficient to avoid the warning, though it may not be sufficient to avoid the overflow. void f (int a, int b) { char buf [13]; sprintf (buf, "a = %i, b = %i\n", a, b); } -Wformat-overflow=2 Level 2 warns also about calls that might overflow the destination buffer given an argument of sufficient length or magnitude. At level 2, unknown numeric arguments are assumed to have the minimum representable value for signed types with a precision greater than 1, and the maximum representable value otherwise. Unknown string arguments whose length cannot be assumed to be bounded either by the directive's precision, or by a finite set of string literals they may evaluate to, or the character array they may point to, are assumed to be 1 character long. At level 2, the call in the example above is again diagnosed, but this time because with a equal to a 32-bit "INT_MIN" the first %i directive will write some of its digits beyond the end of the destination buffer. To make the call safe regardless of the values of the two variables, the size of the destination buffer must be increased to at least 34 bytes. GCC includes the minimum size of the buffer in an informational note following the warning. 
An alternative to increasing the size of the destination buffer is to constrain the range of formatted values. The maximum length of string arguments can be bounded by specifying the precision in the format directive. When numeric arguments of format directives can be assumed to be bounded by less than the precision of their type, choosing an appropriate length modifier to the format specifier will reduce the required buffer size. For example, if a and b in the example above can be assumed to be within the precision of the "short int" type then using either the %hi format directive or casting the argument to "short" reduces the maximum required size of the buffer to 24 bytes. void f (int a, int b) { char buf [23]; sprintf (buf, "a = %hi, b = %i\n", a, (short)b); } -Wno-format-zero-length If -Wformat is specified, do not warn about zero-length formats. The C standard specifies that zero-length formats are allowed. -Wformat=2 Enable -Wformat plus additional format checks. Currently equivalent to -Wformat -Wformat-nonliteral -Wformat-security -Wformat-y2k. -Wformat-nonliteral If -Wformat is specified, also warn if the format string is not a string literal and so cannot be checked, unless the format function takes its format arguments as a "va_list". -Wformat-security If -Wformat is specified, also warn about uses of format functions that represent possible security problems. At present, this warns about calls to "printf" and "scanf" functions where the format string is not a string literal and there are no format arguments, as in "printf (foo);". This may be a security hole if the format string came from untrusted input and contains %n. (This is currently a subset of what -Wformat-nonliteral warns about, but in future warnings may be added to -Wformat-security that are not included in -Wformat-nonliteral.) -Wformat-signedness If -Wformat is specified, also warn if the format string requires an unsigned argument and the argument is signed and vice versa. -Wformat-truncation -Wformat-truncation=level Warn about calls to formatted input/output functions such as "snprintf" and "vsnprintf" that might result in output truncation. When the exact number of bytes written by a format directive cannot be determined at compile-time it is estimated based on heuristics that depend on the level argument and on optimization. While enabling optimization will in most cases improve the accuracy of the warning, it may also result in false positives. Except as noted otherwise, the option uses the same logic -Wformat-overflow. -Wformat-truncation -Wformat-truncation=1 Level 1 of -Wformat-truncation enabled by -Wformat employs a conservative approach that warns only about calls to bounded functions whose return value is unused and that will most likely result in output truncation. -Wformat-truncation=2 Level 2 warns also about calls to bounded functions whose return value is used and that might result in truncation given an argument of sufficient length or magnitude. -Wformat-y2k If -Wformat is specified, also warn about "strftime" formats that may yield only a two-digit year. -Wnonnull Warn about passing a null pointer for arguments marked as requiring a non-null value by the "nonnull" function attribute. -Wnonnull is included in -Wall and -Wformat. It can be disabled with the -Wno-nonnull option. -Wnonnull-compare Warn when comparing an argument marked with the "nonnull" function attribute against null inside the function. -Wnonnull-compare is included in -Wall. 
It can be disabled with the -Wno-nonnull-compare option. -Wnull-dereference Warn if the compiler detects paths that trigger erroneous or undefined behavior due to dereferencing a null pointer. This option is only active when -fdelete-null-pointer-checks is active, which is enabled by optimizations in most targets. The precision of the warnings depends on the optimization options used. -Winit-self (C, C++, Objective-C and Objective-C++ only) Warn about uninitialized variables that are initialized with themselves. Note this option can only be used with the -Wuninitialized option. For example, GCC warns about "i" being uninitialized in the following snippet only when -Winit-self has been specified: int f() { int i = i; return i; } This warning is enabled by -Wall in C++. -Wimplicit-int (C and Objective-C only) Warn when a declaration does not specify a type. This warning is enabled by -Wall. -Wimplicit-function-declaration (C and Objective-C only) Give a warning whenever a function is used before being declared. In C99 mode (-std=c99 or -std=gnu99), this warning is enabled by default and it is made into an error by -pedantic-errors. This warning is also enabled by -Wall. -Wimplicit (C and Objective-C only) Same as -Wimplicit-int and -Wimplicit-function-declaration. This warning is enabled by -Wall. -Wimplicit-fallthrough -Wimplicit-fallthrough is the same as -Wimplicit-fallthrough=3 and -Wno-implicit-fallthrough is the same as -Wimplicit-fallthrough=0. -Wimplicit-fallthrough=n Warn when a switch case falls through. For example: switch (cond) { case 1: a = 1; break; case 2: a = 2; case 3: a = 3; break; } This warning does not warn when the last statement of a case cannot fall through, e.g. when there is a return statement or a call to a function declared with the noreturn attribute. -Wimplicit-fallthrough= also takes into account control flow statements, such as ifs, and only warns when appropriate. E.g. switch (cond) { case 1: if (i > 3) { bar (5); break; } else if (i < 1) { bar (0); } else return; default: ... } Since there are occasions where a switch case fall through is desirable, GCC provides an attribute, "__attribute__ ((fallthrough))", that is to be used along with a null statement to suppress this warning that would normally occur: switch (cond) { case 1: bar (0); __attribute__ ((fallthrough)); default: ... } C++17 provides a standard way to suppress the -Wimplicit-fallthrough warning using "[[fallthrough]];" instead of the GNU attribute. In C++11 or C++14 users can use "[[gnu::fallthrough]];", which is a GNU extension. Instead of these attributes, it is also possible to add a fallthrough comment to silence the warning. The whole body of the C or C++ style comment should match the given regular expressions listed below. The option argument n specifies what kind of comments are accepted:

* -Wimplicit-fallthrough=0 disables the warning altogether.

* -Wimplicit-fallthrough=1 matches ".*" regular expression, any comment is used as fallthrough comment.

* -Wimplicit-fallthrough=2 case insensitively matches ".*falls?[ \t-]*thr(ough|u).*" regular expression.

* -Wimplicit-fallthrough=3 case sensitively matches one of the following regular expressions:

    "-fallthrough"
    "@fallthrough@"
    "lint -fallthrough[ \t]*"
    "[ \t.!]*(ELSE,? |INTENTIONAL(LY)? )?FALL(S | |-)?THR(OUGH|U)[ \t.!]*(-[^\n\r]*)?"
    "[ \t.!]*(Else,? |Intentional(ly)? )?Fall((s | |-)[Tt]|t)hr(ough|u)[ \t.!]*(-[^\n\r]*)?"
    "[ \t.!]*([Ee]lse,? |[Ii]ntentional(ly)? )?fall(s | |-)?thr(ough|u)[ \t.!]*(-[^\n\r]*)?"

* -Wimplicit-fallthrough=4 case sensitively matches one of the following regular expressions:

    "-fallthrough"
    "@fallthrough@"
    "lint -fallthrough[ \t]*"
    "[ \t]*FALLTHR(OUGH|U)[ \t]*"

* -Wimplicit-fallthrough=5 doesn't recognize any comments as fallthrough comments, only attributes disable the warning.

The comment needs to be followed after optional whitespace and other comments by "case" or "default" keywords or by a user label that precedes some "case" or "default" label. switch (cond) { case 1: bar (0); /* FALLTHRU */ default: ... } The -Wimplicit-fallthrough=3 warning is enabled by -Wextra. -Wif-not-aligned (C, C++, Objective-C and Objective-C++ only) Control if warning triggered by the "warn_if_not_aligned" attribute should be issued. This is enabled by default. Use -Wno-if-not-aligned to disable it. -Wignored-qualifiers (C and C++ only) Warn if the return type of a function has a type qualifier such as "const". For ISO C such a type qualifier has no effect, since the value returned by a function is not an lvalue. For C++, the warning is only emitted for scalar types or "void". ISO C prohibits qualified "void" return types on function definitions, so such return types always receive a warning even without this option. This warning is also enabled by -Wextra. -Wignored-attributes (C and C++ only) Warn when an attribute is ignored. This is different from the -Wattributes option in that it warns whenever the compiler decides to drop an attribute, not that the attribute is either unknown, used in a wrong place, etc. This warning is enabled by default. -Wmain Warn if the type of "main" is suspicious. "main" should be a function with external linkage, returning int, taking either zero arguments, two, or three arguments of appropriate types. This warning is enabled by default in C++ and is enabled by either -Wall or -Wpedantic. -Wmisleading-indentation (C and C++ only) Warn when the indentation of the code does not reflect the block structure. Specifically, a warning is issued for "if", "else", "while", and "for" clauses with a guarded statement that does not use braces, followed by an unguarded statement with the same indentation. In the following example, the call to "bar" is misleadingly indented as if it were guarded by the "if" conditional. if (some_condition ()) foo (); bar (); /* Gotcha: this is not guarded by the "if". */ In the case of mixed tabs and spaces, the warning uses the -ftabstop= option to determine if the statements line up (defaulting to 8). The warning is not issued for code involving multiline preprocessor logic such as the following example. if (flagA) foo (0); #if SOME_CONDITION_THAT_DOES_NOT_HOLD if (flagB) #endif foo (1); The warning is not issued after a "#line" directive, since this typically indicates autogenerated code, and no assumptions can be made about the layout of the file that the directive references. This warning is enabled by -Wall in C and C++. -Wmissing-attributes Warn when a declaration of a function is missing one or more attributes that a related function is declared with and whose absence may adversely affect the correctness or efficiency of generated code. For example, the warning is issued for declarations of aliases that use attributes to specify less restrictive requirements than those of their targets. This typically represents a potential optimization opportunity.
By contrast, the -Wattribute-alias=2 option controls warnings issued when the alias is more restrictive than the target, which could lead to incorrect code generation. Attributes considered include "alloc_align", "alloc_size", "cold", "const", "hot", "leaf", "malloc", "nonnull", "noreturn", "nothrow", "pure", "returns_nonnull", and "returns_twice". In C++, the warning is issued when an explicit specialization of a primary template declared with attribute "alloc_align", "alloc_size", "assume_aligned", "format", "format_arg", "malloc", or "nonnull" is declared without it. Attributes "deprecated", "error", and "warning" suppress the warning. You can use the "copy" attribute to apply the same set of attributes to a declaration as that on another declaration without explicitly enumerating the attributes. This attribute can be applied to declarations of functions, variables, or types. -Wmissing-attributes is enabled by -Wall. For example, since the declaration of the primary function template below makes use of both attribute "malloc" and "alloc_size", the declaration of the explicit specialization of the template is diagnosed because it is missing one of the attributes. template <class T> T* __attribute__ ((malloc, alloc_size (1))) allocate (size_t); template <> void* __attribute__ ((malloc)) // missing alloc_size allocate<void> (size_t); -Wmissing-braces Warn if an aggregate or union initializer is not fully bracketed. In the following example, the initializer for "a" is not fully bracketed, but that for "b" is fully bracketed. int a[2][2] = { 0, 1, 2, 3 }; int b[2][2] = { { 0, 1 }, { 2, 3 } }; This warning is enabled by -Wall in C. -Wmissing-include-dirs (C, C++, Objective-C and Objective-C++ only) Warn if a user-supplied include directory does not exist. -Wmissing-profile Warn if feedback profiles are missing when using the -fprofile-use option. This option diagnoses those cases where a new function or a new file is added to the user code between compiling with -fprofile-generate and with -fprofile-use, without regenerating the profiles. In these cases, the profile feedback data files do not contain any profile feedback information for the newly added function or file respectively. Also, in the case when profile count data (.gcda) files are removed, GCC cannot use any profile feedback information. In all these cases, warnings are issued to inform the user that a profile generation step is due. -Wno-missing-profile can be used to disable the warning. Ignoring the warning can result in poorly optimized code. Completely disabling the warning is not recommended and should be done only when non-existent profile data is justified. -Wmultistatement-macros Warn about unsafe multiple statement macros that appear to be guarded by a clause such as "if", "else", "for", "switch", or "while", in which only the first statement is actually guarded after the macro is expanded. For example: #define DOIT x++; y++ if (c) DOIT; will increment "y" unconditionally, not just when "c" holds. This can usually be fixed by wrapping the macro in a do-while loop: #define DOIT do { x++; y++; } while (0) if (c) DOIT; This warning is enabled by -Wall in C and C++. -Wparentheses Warn if parentheses are omitted in certain contexts, such as when there is an assignment in a context where a truth value is expected, or when operators are nested whose precedence people often get confused about. Also warn if a comparison like "x<=y<=z" appears; this is equivalent to "(x<=y ?
1 : 0) <= z", which is a different interpretation from that of ordinary mathematical notation. Also warn for dangerous uses of the GNU extension to "?:" with omitted middle operand. When the condition in the "?:" operator is a boolean expression, the omitted value is always 1. Often programmers expect it to be a value computed inside the conditional expression instead. For C++ this also warns for some cases of unnecessary parentheses in declarations, which can indicate an attempt at a function call instead of a declaration: { // Declares a local variable called mymutex. std::unique_lock<std::mutex> (mymutex); // User meant std::unique_lock<std::mutex> lock (mymutex); } This warning is enabled by -Wall.

-Wsequence-point Warn about code that may have undefined semantics because of violations of sequence point rules in the C and C++ standards. The C and C++ standards define the order in which expressions in a C/C++ program are evaluated in terms of sequence points, which represent a partial ordering between the execution of parts of the program: those executed before the sequence point, and those executed after it. These occur after the evaluation of a full expression (one which is not part of a larger expression), after the evaluation of the first operand of a "&&", "||", "? :" or "," (comma) operator, before a function is called (but after the evaluation of its arguments and the expression denoting the called function), and in certain other places. Other than as expressed by the sequence point rules, the order of evaluation of subexpressions of an expression is not specified. All these rules describe only a partial order rather than a total order, since, for example, if two functions are called within one expression with no sequence point between them, the order in which the functions are called is not specified. However, the standards committee have ruled that function calls do not overlap. It is not specified when between sequence points modifications to the values of objects take effect. Programs whose behavior depends on this have undefined behavior; the C and C++ standards specify that "Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. Furthermore, the prior value shall be read only to determine the value to be stored.". If a program breaks these rules, the results on any particular implementation are entirely unpredictable. Examples of code with undefined behavior are "a = a++;", "a[n] = b[n++]" and "a[i++] = i;". Some more complicated cases are not diagnosed by this option, and it may give an occasional false positive result, but in general it has been found fairly effective at detecting this sort of problem in programs. The C++17 standard defines the order of evaluation of operands in more cases: in particular it requires that the right-hand side of an assignment be evaluated before the left-hand side, so the above examples are no longer undefined there. But this warning still warns about them, to help people avoid writing code that is undefined in C and earlier revisions of C++. The standard is worded confusingly; therefore there is some debate over the precise meaning of the sequence point rules in subtle cases. Links to discussions of the problem, including proposed formal definitions, may be found on the GCC readings page, at <http://gcc.gnu.org/readings.html>. This warning is enabled by -Wall for C and C++.
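As a concrete illustration of the rules above (this fragment is illustrative only and is not one of the examples shipped with the manual; the function names are arbitrary), the first function modifies "i" twice with no intervening sequence point, so compiling it with -Wall should produce a -Wsequence-point diagnostic, while the second, rewritten form is well defined:

        /* Undefined: "i" is both read and modified between sequence
           points; -Wsequence-point (implied by -Wall) should flag it.  */
        int store_bad (int a[], int i)
        {
            a[i] = i++;
            return i;
        }

        /* Well defined: the store and the increment are separated.  */
        int store_ok (int a[], int i)
        {
            a[i] = i;
            return i + 1;
        }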
-Wno-return-local-addr Do not warn about returning a pointer (or in C++, a reference) to a variable that goes out of scope after the function returns. -Wreturn-type Warn whenever a function is defined with a return type that defaults to "int". Also warn about any "return" statement with no return value in a function whose return type is not "void" (falling off the end of the function body is considered returning without a value). For C only, warn about a "return" statement with an expression in a function whose return type is "void", unless the expression type is also "void". As a GNU extension, the latter case is accepted without a warning unless -Wpedantic is used. Attempting to use the return value of a non-"void" function other than "main" that flows off the end by reaching the closing curly brace that terminates the function is undefined. Unlike in C, in C++, flowing off the end of a non-"void" function other than "main" results in undefined behavior even when the value of the function is not used. This warning is enabled by default in C++ and by -Wall otherwise. -Wshift-count-negative Warn if shift count is negative. This warning is enabled by default. -Wshift-count-overflow Warn if shift count >= width of type. This warning is enabled by default. -Wshift-negative-value Warn if left shifting a negative value. This warning is enabled by -Wextra in C99 (and newer) and C++11 to C++17 modes. -Wshift-overflow -Wshift-overflow=n Warn about left shift overflows. This warning is enabled by default in C99 and C++11 modes (and newer). -Wshift-overflow=1 This is the warning level of -Wshift-overflow and is enabled by default in C99 and C++11 modes (and newer). This warning level does not warn about left-shifting 1 into the sign bit. (However, in C, such an overflow is still rejected in contexts where an integer constant expression is required.) No warning is emitted in C++2A mode (and newer), as signed left shifts always wrap. -Wshift-overflow=2 This warning level also warns about left-shifting 1 into the sign bit, unless C++14 mode (or newer) is active. -Wswitch Warn whenever a "switch" statement has an index of enumerated type and lacks a "case" for one or more of the named codes of that enumeration. (The presence of a "default" label prevents this warning.) "case" labels outside the enumeration range also provoke warnings when this option is used (even if there is a "default" label). This warning is enabled by -Wall. -Wswitch-default Warn whenever a "switch" statement does not have a "default" case. -Wswitch-enum Warn whenever a "switch" statement has an index of enumerated type and lacks a "case" for one or more of the named codes of that enumeration. "case" labels outside the enumeration range also provoke warnings when this option is used. The only difference between -Wswitch and this option is that this option gives a warning about an omitted enumeration code even if there is a "default" label. -Wswitch-bool Warn whenever a "switch" statement has an index of boolean type and the case values are outside the range of a boolean type. It is possible to suppress this warning by casting the controlling expression to a type other than "bool". For example: switch ((int) (a == 4)) { ... } This warning is enabled by default for C and C++ programs. -Wswitch-unreachable Warn whenever a "switch" statement contains statements between the controlling expression and the first case label, which will never be executed. For example: switch (cond) { i = 15; ... case 5: ... 
} -Wswitch-unreachable does not warn if the statement between the controlling expression and the first case label is just a declaration: switch (cond) { int i; ... case 5: i = 5; ... } This warning is enabled by default for C and C++ programs. -Wsync-nand (C and C++ only) Warn when "__sync_fetch_and_nand" and "__sync_nand_and_fetch" built-in functions are used. These functions changed semantics in GCC 4.4. -Wunused-but-set-parameter Warn whenever a function parameter is assigned to, but otherwise unused (aside from its declaration). To suppress this warning use the "unused" attribute. This warning is also enabled by -Wunused together with -Wextra. -Wunused-but-set-variable Warn whenever a local variable is assigned to, but otherwise unused (aside from its declaration). This warning is enabled by -Wall. To suppress this warning use the "unused" attribute. This warning is also enabled by -Wunused, which is enabled by -Wall. -Wunused-function Warn whenever a static function is declared but not defined or a non-inline static function is unused. This warning is enabled by -Wall. -Wunused-label Warn whenever a label is declared but not used. This warning is enabled by -Wall. To suppress this warning use the "unused" attribute. -Wunused-local-typedefs (C, Objective-C, C++ and Objective-C++ only) Warn when a typedef locally defined in a function is not used. This warning is enabled by -Wall. -Wunused-parameter Warn whenever a function parameter is unused aside from its declaration. To suppress this warning use the "unused" attribute. -Wno-unused-result Do not warn if a caller of a function marked with attribute "warn_unused_result" does not use its return value. The default is -Wunused-result. -Wunused-variable Warn whenever a local or static variable is unused aside from its declaration. This option implies -Wunused-const-variable=1 for C, but not for C++. This warning is enabled by -Wall. To suppress this warning use the "unused" attribute. -Wunused-const-variable -Wunused-const-variable=n Warn whenever a constant static variable is unused aside from its declaration. -Wunused-const-variable=1 is enabled by -Wunused-variable for C, but not for C++. In C this declares variable storage, but in C++ this is not an error since const variables take the place of "#define"s. To suppress this warning use the "unused" attribute. -Wunused-const-variable=1 This is the warning level that is enabled by -Wunused-variable for C. It warns only about unused static const variables defined in the main compilation unit, but not about static const variables declared in any header included. -Wunused-const-variable=2 This warning level also warns for unused constant static variables in headers (excluding system headers). This is the warning level of -Wunused-const-variable and must be explicitly requested since in C++ this isn't an error and in C it might be harder to clean up all headers included. -Wunused-value Warn whenever a statement computes a result that is explicitly not used. To suppress this warning cast the unused expression to "void". This includes an expression-statement or the left-hand side of a comma expression that contains no side effects. For example, an expression such as "x[i,j]" causes a warning, while "x[(void)i,j]" does not. This warning is enabled by -Wall. -Wunused All the above -Wunused options combined. In order to get a warning about an unused function parameter, you must either specify -Wextra -Wunused (note that -Wall implies -Wunused), or separately specify -Wunused-parameter. 
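The "unused" attribute mentioned throughout the -Wunused entries is the usual way to keep a deliberately ignored parameter or variable quiet. The sketch below is illustrative only (the function and variable names are arbitrary): compiled with -Wall -Wextra, "mode" and "scratch" should draw -Wunused-parameter and -Wunused-but-set-variable diagnostics, while the two declarations carrying the attribute should not.

        int handler (int mode,                          /* unused parameter      */
                     int fd __attribute__ ((unused)))   /* deliberately ignored  */
        {
            int scratch;                                /* set but never read    */
            int debug __attribute__ ((unused)) = 0;     /* deliberately ignored  */

            scratch = 1;
            return 42;
        }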
-Wuninitialized Warn if an automatic variable is used without first being initialized or if a variable may be clobbered by a "setjmp" call. In C++, warn if a non-static reference or non-static "const" member appears in a class without constructors. If you want to warn about code that uses the uninitialized value of the variable in its own initializer, use the -Winit-self option. These warnings occur for individual uninitialized or clobbered elements of structure, union or array variables as well as for variables that are uninitialized or clobbered as a whole. They do not occur for variables or elements declared "volatile". Because these warnings depend on optimization, the exact variables or elements for which there are warnings depends on the precise optimization options and version of GCC used. Note that there may be no warning about a variable that is used only to compute a value that itself is never used, because such computations may be deleted by data flow analysis before the warnings are printed. -Winvalid-memory-model Warn for invocations of __atomic Builtins, __sync Builtins, and the C11 atomic generic functions with a memory consistency argument that is either invalid for the operation or outside the range of values of the "memory_order" enumeration. For example, since the "__atomic_store" and "__atomic_store_n" built-ins are only defined for the relaxed, release, and sequentially consistent memory orders the following code is diagnosed: void store (int *i) { __atomic_store_n (i, 0, memory_order_consume); } -Winvalid-memory-model is enabled by default. -Wmaybe-uninitialized For an automatic (i.e. local) variable, if there exists a path from the function entry to a use of the variable that is initialized, but there exist some other paths for which the variable is not initialized, the compiler emits a warning if it cannot prove the uninitialized paths are not executed at run time. These warnings are only possible in optimizing compilation, because otherwise GCC does not keep track of the state of variables. These warnings are made optional because GCC may not be able to determine when the code is correct in spite of appearing to have an error. Here is one example of how this can happen: { int x; switch (y) { case 1: x = 1; break; case 2: x = 4; break; case 3: x = 5; } foo (x); } If the value of "y" is always 1, 2 or 3, then "x" is always initialized, but GCC doesn't know this. To suppress the warning, you need to provide a default case with assert(0) or similar code. This option also warns when a non-volatile automatic variable might be changed by a call to "longjmp". The compiler sees only the calls to "setjmp". It cannot know where "longjmp" will be called; in fact, a signal handler could call it at any point in the code. As a result, you may get a warning even when there is in fact no problem because "longjmp" cannot in fact be called at the place that would cause a problem. Some spurious warnings can be avoided if you declare all the functions you use that never return as "noreturn". This warning is enabled by -Wall or -Wextra. -Wunknown-pragmas Warn when a "#pragma" directive is encountered that is not understood by GCC. If this command-line option is used, warnings are even issued for unknown pragmas in system header files. This is not the case if the warnings are only enabled by the -Wall command-line option. -Wno-pragmas Do not warn about misuses of pragmas, such as incorrect parameters, invalid syntax, or conflicts between pragmas. See also -Wunknown-pragmas. 
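To make the interaction between -Wunknown-pragmas and conditional compilation concrete, consider the illustrative fragment below ("vendor optimize" and "__VENDORCC__" are made-up names, not real GCC or vendor features). With -Wall, the first directive should draw a -Wunknown-pragmas diagnostic; the second sits in a skipped conditional block, so the preprocessor never processes it and no warning is issued.

        #pragma vendor optimize on      /* unknown to GCC: warned about  */

        #ifdef __VENDORCC__             /* guard for the other compiler  */
        #pragma vendor optimize on      /* skipped: no warning           */
        #endif

        int f (void) { return 0; }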
-Wno-prio-ctor-dtor Do not warn if a priority from 0 to 100 is used for constructor or destructor. The use of constructor and destructor attributes allow you to assign a priority to the constructor/destructor to control its order of execution before "main" is called or after it returns. The priority values must be greater than 100 as the compiler reserves priority values between 0--100 for the implementation. -Wstrict-aliasing This option is only active when -fstrict-aliasing is active. It warns about code that might break the strict aliasing rules that the compiler is using for optimization. The warning does not catch all cases, but does attempt to catch the more common pitfalls. It is included in -Wall. It is equivalent to -Wstrict-aliasing=3 -Wstrict-aliasing=n This option is only active when -fstrict-aliasing is active. It warns about code that might break the strict aliasing rules that the compiler is using for optimization. Higher levels correspond to higher accuracy (fewer false positives). Higher levels also correspond to more effort, similar to the way -O works. -Wstrict-aliasing is equivalent to -Wstrict-aliasing=3. Level 1: Most aggressive, quick, least accurate. Possibly useful when higher levels do not warn but -fstrict-aliasing still breaks the code, as it has very few false negatives. However, it has many false positives. Warns for all pointer conversions between possibly incompatible types, even if never dereferenced. Runs in the front end only. Level 2: Aggressive, quick, not too precise. May still have many false positives (not as many as level 1 though), and few false negatives (but possibly more than level 1). Unlike level 1, it only warns when an address is taken. Warns about incomplete types. Runs in the front end only. Level 3 (default for -Wstrict-aliasing): Should have very few false positives and few false negatives. Slightly slower than levels 1 or 2 when optimization is enabled. Takes care of the common pun+dereference pattern in the front end: "*(int*)&some_float". If optimization is enabled, it also runs in the back end, where it deals with multiple statement cases using flow-sensitive points-to information. Only warns when the converted pointer is dereferenced. Does not warn about incomplete types. -Wstrict-overflow -Wstrict-overflow=n This option is only active when signed overflow is undefined. It warns about cases where the compiler optimizes based on the assumption that signed overflow does not occur. Note that it does not warn about all cases where the code might overflow: it only warns about cases where the compiler implements some optimization. Thus this warning depends on the optimization level. An optimization that assumes that signed overflow does not occur is perfectly safe if the values of the variables involved are such that overflow never does, in fact, occur. Therefore this warning can easily give a false positive: a warning about code that is not actually a problem. To help focus on important issues, several warning levels are defined. No warnings are issued for the use of undefined signed overflow when estimating how many iterations a loop requires, in particular when determining whether a loop will be executed at all. -Wstrict-overflow=1 Warn about cases that are both questionable and easy to avoid. For example the compiler simplifies "x + 1 > x" to 1. This level of -Wstrict-overflow is enabled by -Wall; higher levels are not, and must be explicitly requested. 
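As an illustrative sketch of the level 1 case just described (not an example from the manual), the comparison below can be folded to a constant because signed overflow is assumed not to occur; whether the diagnostic actually fires depends on the optimization level and the GCC version, as explained above.

        /* Built with something like "gcc -O2 -Wall -c", the compiler may
           report that it assumes signed overflow does not occur when it
           simplifies "x + 1 > x" to 1.  */
        int always_true (int x)
        {
            return x + 1 > x;
        }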
-Wstrict-overflow=2 Also warn about other cases where a comparison is simplified to a constant. For example: "abs (x) >= 0". This can only be simplified when signed integer overflow is undefined, because "abs (INT_MIN)" overflows to "INT_MIN", which is less than zero. -Wstrict-overflow (with no level) is the same as -Wstrict-overflow=2. -Wstrict-overflow=3 Also warn about other cases where a comparison is simplified. For example: "x + 1 > 1" is simplified to "x > 0". -Wstrict-overflow=4 Also warn about other simplifications not covered by the above cases. For example: "(x * 10) / 5" is simplified to "x * 2". -Wstrict-overflow=5 Also warn about cases where the compiler reduces the magnitude of a constant involved in a comparison. For example: "x + 2 > y" is simplified to "x + 1 >= y". This is reported only at the highest warning level because this simplification applies to many comparisons, so this warning level gives a very large number of false positives. -Wstringop-overflow -Wstringop-overflow=type Warn for calls to string manipulation functions such as "memcpy" and "strcpy" that are determined to overflow the destination buffer. The optional argument is one greater than the type of Object Size Checking to perform to determine the size of the destination. The argument is meaningful only for functions that operate on character arrays but not for raw memory functions like "memcpy" which always make use of Object Size type-0. The option also warns for calls that specify a size in excess of the largest possible object or at most "SIZE_MAX / 2" bytes. The option produces the best results with optimization enabled but can detect a small subset of simple buffer overflows even without optimization in calls to the GCC built-in functions like "__builtin_memcpy" that correspond to the standard functions. In any case, the option warns about just a subset of buffer overflows detected by the corresponding overflow checking built-ins. For example, the option will issue a warning for the "strcpy" call below because it copies at least 5 characters (the string "blue" including the terminating NUL) into the buffer of size 4. enum Color { blue, purple, yellow }; const char* f (enum Color clr) { static char buf [4]; const char *str; switch (clr) { case blue: str = "blue"; break; case purple: str = "purple"; break; case yellow: str = "yellow"; break; } return strcpy (buf, str); // warning here } Option -Wstringop-overflow=2 is enabled by default. -Wstringop-overflow -Wstringop-overflow=1 The -Wstringop-overflow=1 option uses type-zero Object Size Checking to determine the sizes of destination objects. This is the default setting of the option. At this setting the option will not warn for writes past the end of subobjects of larger objects accessed by pointers unless the size of the largest surrounding object is known. When the destination may be one of several objects it is assumed to be the largest one of them. On Linux systems, when optimization is enabled at this setting the option warns for the same code as when the "_FORTIFY_SOURCE" macro is defined to a non-zero value. -Wstringop-overflow=2 The -Wstringop-overflow=2 option uses type-one Object Size Checking to determine the sizes of destination objects. At this setting the option will warn about overflows when writing to members of the largest complete objects whose exact size is known. 
It will, however, not warn for excessive writes to the same members of unknown objects referenced by pointers since they may point to arrays containing unknown numbers of elements.

-Wstringop-overflow=3 The -Wstringop-overflow=3 option uses type-two Object Size Checking to determine the sizes of destination objects. At this setting the option warns about overflowing the smallest object or data member. This is the most restrictive setting of the option that may result in warnings for safe code.

-Wstringop-overflow=4 The -Wstringop-overflow=4 option uses type-three Object Size Checking to determine the sizes of destination objects. At this setting the option will warn about overflowing any data members, and when the destination is one of several objects it uses the size of the largest of them to decide whether to issue a warning. Similarly to -Wstringop-overflow=3 this setting of the option may result in warnings for benign code.

-Wstringop-truncation Warn for calls to bounded string manipulation functions such as "strncat", "strncpy", and "stpncpy" that may either truncate the copied string or leave the destination unchanged. In the following example, the call to "strncat" specifies a bound that is less than the length of the source string. As a result, the copy of the source will be truncated and so the call is diagnosed. To avoid the warning use "bufsize - strlen (buf) - 1" as the bound. void append (char *buf, size_t bufsize) { strncat (buf, ".txt", 3); } As another example, the following call to "strncpy" results in copying to "d" just the characters preceding the terminating NUL, without appending the NUL to the end. Assuming the result of "strncpy" is necessarily a NUL-terminated string is a common mistake, and so the call is diagnosed. To avoid the warning when the result is not expected to be NUL-terminated, call "memcpy" instead. void copy (char *d, const char *s) { strncpy (d, s, strlen (s)); } In the following example, the call to "strncpy" specifies the size of the destination buffer as the bound. If the length of the source string is equal to or greater than this size the result of the copy will not be NUL-terminated. Therefore, the call is also diagnosed. To avoid the warning, specify "sizeof buf - 1" as the bound and set the last element of the buffer to "NUL". void copy (const char *s) { char buf[80]; strncpy (buf, s, sizeof buf); ... } In situations where a character array is intended to store a sequence of bytes with no terminating "NUL", such an array may be annotated with attribute "nonstring" to avoid this warning. Such arrays, however, are not suitable arguments to functions that expect "NUL"-terminated strings. To help detect accidental misuses of such arrays GCC issues warnings unless it can prove that the use is safe.

-Wsuggest-attribute=[pure|const|noreturn|format|cold|malloc] Warn for cases where adding an attribute may be beneficial. The attributes currently supported are listed below.

-Wsuggest-attribute=pure -Wsuggest-attribute=const -Wsuggest-attribute=noreturn -Wmissing-noreturn -Wsuggest-attribute=malloc Warn about functions that might be candidates for attributes "pure", "const", "noreturn", or "malloc". The compiler only warns for functions visible in other compilation units or (in the case of "pure" and "const") if it cannot prove that the function returns normally. A function returns normally if it doesn't contain an infinite loop or return abnormally by throwing, calling "abort" or trapping.
This analysis requires option -fipa-pure-const, which is enabled by default at -O and higher. Higher optimization levels improve the accuracy of the analysis. -Wsuggest-attribute=format -Wmissing-format-attribute Warn about function pointers that might be candidates for "format" attributes. Note these are only possible candidates, not absolute ones. GCC guesses that function pointers with "format" attributes that are used in assignment, initialization, parameter passing or return statements should have a corresponding "format" attribute in the resulting type. I.e. the left-hand side of the assignment or initialization, the type of the parameter variable, or the return type of the containing function respectively should also have a "format" attribute to avoid the warning. GCC also warns about function definitions that might be candidates for "format" attributes. Again, these are only possible candidates. GCC guesses that "format" attributes might be appropriate for any function that calls a function like "vprintf" or "vscanf", but this might not always be the case, and some functions for which "format" attributes are appropriate may not be detected. -Wsuggest-attribute=cold Warn about functions that might be candidates for "cold" attribute. This is based on static detection and generally will only warn about functions which always leads to a call to another "cold" function such as wrappers of C++ "throw" or fatal error reporting functions leading to "abort". -Wsuggest-final-types Warn about types with virtual methods where code quality would be improved if the type were declared with the C++11 "final" specifier, or, if possible, declared in an anonymous namespace. This allows GCC to more aggressively devirtualize the polymorphic calls. This warning is more effective with link time optimization, where the information about the class hierarchy graph is more complete. -Wsuggest-final-methods Warn about virtual methods where code quality would be improved if the method were declared with the C++11 "final" specifier, or, if possible, its type were declared in an anonymous namespace or with the "final" specifier. This warning is more effective with link-time optimization, where the information about the class hierarchy graph is more complete. It is recommended to first consider suggestions of -Wsuggest-final-types and then rebuild with new annotations. -Wsuggest-override Warn about overriding virtual functions that are not marked with the override keyword. -Walloc-zero Warn about calls to allocation functions decorated with attribute "alloc_size" that specify zero bytes, including those to the built-in forms of the functions "aligned_alloc", "alloca", "calloc", "malloc", and "realloc". Because the behavior of these functions when called with a zero size differs among implementations (and in the case of "realloc" has been deprecated) relying on it may result in subtle portability bugs and should be avoided. -Walloc-size-larger-than=byte-size Warn about calls to functions decorated with attribute "alloc_size" that attempt to allocate objects larger than the specified number of bytes, or where the result of the size computation in an integer type with infinite precision would exceed the value of PTRDIFF_MAX on the target. -Walloc-size-larger-than=PTRDIFF_MAX is enabled by default. Warnings controlled by the option can be disabled either by specifying byte-size of SIZE_MAX or more or by -Wno-alloc-size-larger-than. -Wno-alloc-size-larger-than Disable -Walloc-size-larger-than= warnings. 
The option is equivalent to -Walloc-size-larger-than=SIZE_MAX or larger. -Walloca This option warns on all uses of "alloca" in the source. -Walloca-larger-than=byte-size This option warns on calls to "alloca" with an integer argument whose value is either zero, or that is not bounded by a controlling predicate that limits its value to at most byte-size. It also warns for calls to "alloca" where the bound value is unknown. Arguments of non-integer types are considered unbounded even if they appear to be constrained to the expected range. For example, a bounded case of "alloca" could be: void func (size_t n) { void *p; if (n <= 1000) p = alloca (n); else p = malloc (n); f (p); } In the above example, passing "-Walloca-larger-than=1000" would not issue a warning because the call to "alloca" is known to be at most 1000 bytes. However, if "-Walloca-larger-than=500" were passed, the compiler would emit a warning. Unbounded uses, on the other hand, are uses of "alloca" with no controlling predicate constraining its integer argument. For example: void func () { void *p = alloca (n); f (p); } If "-Walloca-larger-than=500" were passed, the above would trigger a warning, but this time because of the lack of bounds checking. Note, that even seemingly correct code involving signed integers could cause a warning: void func (signed int n) { if (n < 500) { p = alloca (n); f (p); } } In the above example, n could be negative, causing a larger than expected argument to be implicitly cast into the "alloca" call. This option also warns when "alloca" is used in a loop. -Walloca-larger-than=PTRDIFF_MAX is enabled by default but is usually only effective when -ftree-vrp is active (default for -O2 and above). See also -Wvla-larger-than=byte-size. -Wno-alloca-larger-than Disable -Walloca-larger-than= warnings. The option is equivalent to -Walloca-larger-than=SIZE_MAX or larger. -Warray-bounds -Warray-bounds=n This option is only active when -ftree-vrp is active (default for -O2 and above). It warns about subscripts to arrays that are always out of bounds. This warning is enabled by -Wall. -Warray-bounds=1 This is the warning level of -Warray-bounds and is enabled by -Wall; higher levels are not, and must be explicitly requested. -Warray-bounds=2 This warning level also warns about out of bounds access for arrays at the end of a struct and for arrays accessed through pointers. This warning level may give a larger number of false positives and is deactivated by default. -Wattribute-alias=n -Wno-attribute-alias Warn about declarations using the "alias" and similar attributes whose target is incompatible with the type of the alias. -Wattribute-alias=1 The default warning level of the -Wattribute-alias option diagnoses incompatibilities between the type of the alias declaration and that of its target. Such incompatibilities are typically indicative of bugs. -Wattribute-alias=2 At this level -Wattribute-alias also diagnoses cases where the attributes of the alias declaration are more restrictive than the attributes applied to its target. These mismatches can potentially result in incorrect code generation. In other cases they may be benign and could be resolved simply by adding the missing attribute to the target. For comparison, see the -Wmissing-attributes option, which controls diagnostics when the alias declaration is less restrictive than the target, rather than more restrictive. 
Attributes considered include "alloc_align", "alloc_size", "cold", "const", "hot", "leaf", "malloc", "nonnull", "noreturn", "nothrow", "pure", "returns_nonnull", and "returns_twice". -Wattribute-alias is equivalent to -Wattribute-alias=1. This is the default. You can disable these warnings with either -Wno-attribute-alias or -Wattribute-alias=0. -Wbool-compare Warn about boolean expression compared with an integer value different from "true"/"false". For instance, the following comparison is always false: int n = 5; ... if ((n > 1) == 2) { ... } This warning is enabled by -Wall. -Wbool-operation Warn about suspicious operations on expressions of a boolean type. For instance, bitwise negation of a boolean is very likely a bug in the program. For C, this warning also warns about incrementing or decrementing a boolean, which rarely makes sense. (In C++, decrementing a boolean is always invalid. Incrementing a boolean is invalid in C++17, and deprecated otherwise.) This warning is enabled by -Wall. -Wduplicated-branches Warn when an if-else has identical branches. This warning detects cases like if (p != NULL) return 0; else return 0; It doesn't warn when both branches contain just a null statement. This warning also warn for conditional operators: int i = x ? *p : *p; -Wduplicated-cond Warn about duplicated conditions in an if-else-if chain. For instance, warn for the following code: if (p->q != NULL) { ... } else if (p->q != NULL) { ... } -Wframe-address Warn when the __builtin_frame_address or __builtin_return_address is called with an argument greater than 0. Such calls may return indeterminate values or crash the program. The warning is included in -Wall. -Wno-discarded-qualifiers (C and Objective-C only) Do not warn if type qualifiers on pointers are being discarded. Typically, the compiler warns if a "const char *" variable is passed to a function that takes a "char *" parameter. This option can be used to suppress such a warning. -Wno-discarded-array-qualifiers (C and Objective-C only) Do not warn if type qualifiers on arrays which are pointer targets are being discarded. Typically, the compiler warns if a "const int (*)[]" variable is passed to a function that takes a "int (*)[]" parameter. This option can be used to suppress such a warning. -Wno-incompatible-pointer-types (C and Objective-C only) Do not warn when there is a conversion between pointers that have incompatible types. This warning is for cases not covered by -Wno-pointer-sign, which warns for pointer argument passing or assignment with different signedness. -Wno-int-conversion (C and Objective-C only) Do not warn about incompatible integer to pointer and pointer to integer conversions. This warning is about implicit conversions; for explicit conversions the warnings -Wno-int-to-pointer-cast and -Wno-pointer-to-int-cast may be used. -Wno-div-by-zero Do not warn about compile-time integer division by zero. Floating-point division by zero is not warned about, as it can be a legitimate way of obtaining infinities and NaNs. -Wsystem-headers Print warning messages for constructs found in system header files. Warnings from system headers are normally suppressed, on the assumption that they usually do not indicate real problems and would only make the compiler output harder to read. Using this command-line option tells GCC to emit warnings from system headers as if they occurred in user code. 
However, note that using -Wall in conjunction with this option does not warn about unknown pragmas in system headers---for that, -Wunknown-pragmas must also be used. -Wtautological-compare Warn if a self-comparison always evaluates to true or false. This warning detects various mistakes such as: int i = 1; ... if (i > i) { ... } This warning also warns about bitwise comparisons that always evaluate to true or false, for instance: if ((a & 16) == 10) { ... } will always be false. This warning is enabled by -Wall. -Wtrampolines Warn about trampolines generated for pointers to nested functions. A trampoline is a small piece of data or code that is created at run time on the stack when the address of a nested function is taken, and is used to call the nested function indirectly. For some targets, it is made up of data only and thus requires no special treatment. But, for most targets, it is made up of code and thus requires the stack to be made executable in order for the program to work properly. -Wfloat-equal Warn if floating-point values are used in equality comparisons. The idea behind this is that sometimes it is convenient (for the programmer) to consider floating-point values as approximations to infinitely precise real numbers. If you are doing this, then you need to compute (by analyzing the code, or in some other way) the maximum or likely maximum error that the computation introduces, and allow for it when performing comparisons (and when producing output, but that's a different problem). In particular, instead of testing for equality, you should check to see whether the two values have ranges that overlap; and this is done with the relational operators, so equality comparisons are probably mistaken. -Wtraditional (C and Objective-C only) Warn about certain constructs that behave differently in traditional and ISO C. Also warn about ISO C constructs that have no traditional C equivalent, and/or problematic constructs that should be avoided. * Macro parameters that appear within string literals in the macro body. In traditional C macro replacement takes place within string literals, but in ISO C it does not. * In traditional C, some preprocessor directives did not exist. Traditional preprocessors only considered a line to be a directive if the # appeared in column 1 on the line. Therefore -Wtraditional warns about directives that traditional C understands but ignores because the # does not appear as the first character on the line. It also suggests you hide directives like "#pragma" not understood by traditional C by indenting them. Some traditional implementations do not recognize "#elif", so this option suggests avoiding it altogether. * A function-like macro that appears without arguments. * The unary plus operator. * The U integer constant suffix, or the F or L floating- point constant suffixes. (Traditional C does support the L suffix on integer constants.) Note, these suffixes appear in macros defined in the system headers of most modern systems, e.g. the _MIN/_MAX macros in "<limits.h>". Use of these macros in user code might normally lead to spurious warnings, however GCC's integrated preprocessor has enough context to avoid warning in these cases. * A function declared external in one block and then used after the end of the block. * A "switch" statement has an operand of type "long". * A non-"static" function declaration follows a "static" one. This construct is not accepted by some traditional C compilers. 
* The ISO type of an integer constant has a different width or signedness from its traditional type. This warning is only issued if the base of the constant is ten. I.e. hexadecimal or octal values, which typically represent bit patterns, are not warned about. * Usage of ISO string concatenation is detected. * Initialization of automatic aggregates. * Identifier conflicts with labels. Traditional C lacks a separate namespace for labels. * Initialization of unions. If the initializer is zero, the warning is omitted. This is done under the assumption that the zero initializer in user code appears conditioned on e.g. "__STDC__" to avoid missing initializer warnings and relies on default initialization to zero in the traditional C case. * Conversions by prototypes between fixed/floating-point values and vice versa. The absence of these prototypes when compiling with traditional C causes serious problems. This is a subset of the possible conversion warnings; for the full set use -Wtraditional-conversion. * Use of ISO C style function definitions. This warning intentionally is not issued for prototype declarations or variadic functions because these ISO C features appear in your code when using libiberty's traditional C compatibility macros, "PARAMS" and "VPARAMS". This warning is also bypassed for nested functions because that feature is already a GCC extension and thus not relevant to traditional C compatibility. -Wtraditional-conversion (C and Objective-C only) Warn if a prototype causes a type conversion that is different from what would happen to the same argument in the absence of a prototype. This includes conversions of fixed point to floating and vice versa, and conversions changing the width or signedness of a fixed-point argument except when the same as the default promotion. -Wdeclaration-after-statement (C and Objective-C only) Warn when a declaration is found after a statement in a block. This construct, known from C++, was introduced with ISO C99 and is by default allowed in GCC. It is not supported by ISO C90. -Wshadow Warn whenever a local variable or type declaration shadows another variable, parameter, type, class member (in C++), or instance variable (in Objective-C) or whenever a built-in function is shadowed. Note that in C++, the compiler warns if a local variable shadows an explicit typedef, but not if it shadows a struct/class/enum. Same as -Wshadow=global. -Wno-shadow-ivar (Objective-C only) Do not warn whenever a local variable shadows an instance variable in an Objective-C method. -Wshadow=global The default for -Wshadow. Warns for any (global) shadowing. -Wshadow=local Warn when a local variable shadows another local variable or parameter. This warning is enabled by -Wshadow=global. -Wshadow=compatible-local Warn when a local variable shadows another local variable or parameter whose type is compatible with that of the shadowing variable. In C++, type compatibility here means the type of the shadowing variable can be converted to that of the shadowed variable. The creation of this flag (in addition to -Wshadow=local) is based on the idea that when a local variable shadows another one of incompatible type, it is most likely intentional, not a bug or typo, as shown in the following example: for (SomeIterator i = SomeObj.begin(); i != SomeObj.end(); ++i) { for (int i = 0; i < N; ++i) { ... } ... } Since the two variable "i" in the example above have incompatible types, enabling only -Wshadow=compatible-local will not emit a warning. 
Because their types are incompatible, if a programmer accidentally uses one in place of the other, type checking will catch that and emit an error or warning. So not warning (about shadowing) in this case will not lead to undetected bugs. Use of this flag instead of -Wshadow=local can possibly reduce the number of warnings triggered by intentional shadowing. This warning is enabled by -Wshadow=local. -Wlarger-than=byte-size Warn whenever an object is defined whose size exceeds byte- size. -Wlarger-than=PTRDIFF_MAX is enabled by default. Warnings controlled by the option can be disabled either by specifying byte-size of SIZE_MAX or more or by -Wno-larger-than. -Wno-larger-than Disable -Wlarger-than= warnings. The option is equivalent to -Wlarger-than=SIZE_MAX or larger. -Wframe-larger-than=byte-size Warn if the size of a function frame exceeds byte-size. The computation done to determine the stack frame size is approximate and not conservative. The actual requirements may be somewhat greater than byte-size even if you do not get a warning. In addition, any space allocated via "alloca", variable-length arrays, or related constructs is not included by the compiler when determining whether or not to issue a warning. -Wframe-larger-than=PTRDIFF_MAX is enabled by default. Warnings controlled by the option can be disabled either by specifying byte-size of SIZE_MAX or more or by -Wno-frame-larger-than. -Wno-frame-larger-than Disable -Wframe-larger-than= warnings. The option is equivalent to -Wframe-larger-than=SIZE_MAX or larger. -Wno-free-nonheap-object Do not warn when attempting to free an object that was not allocated on the heap. -Wstack-usage=byte-size Warn if the stack usage of a function might exceed byte-size. The computation done to determine the stack usage is conservative. Any space allocated via "alloca", variable- length arrays, or related constructs is included by the compiler when determining whether or not to issue a warning. The message is in keeping with the output of -fstack-usage. * If the stack usage is fully static but exceeds the specified amount, it's: warning: stack usage is 1120 bytes * If the stack usage is (partly) dynamic but bounded, it's: warning: stack usage might be 1648 bytes * If the stack usage is (partly) dynamic and not bounded, it's: warning: stack usage might be unbounded -Wstack-usage=PTRDIFF_MAX is enabled by default. Warnings controlled by the option can be disabled either by specifying byte-size of SIZE_MAX or more or by -Wno-stack-usage. -Wno-stack-usage Disable -Wstack-usage= warnings. The option is equivalent to -Wstack-usage=SIZE_MAX or larger. -Wunsafe-loop-optimizations Warn if the loop cannot be optimized because the compiler cannot assume anything on the bounds of the loop indices. With -funsafe-loop-optimizations warn if the compiler makes such assumptions. -Wno-pedantic-ms-format (MinGW targets only) When used in combination with -Wformat and -pedantic without GNU extensions, this option disables the warnings about non- ISO "printf" / "scanf" format width specifiers "I32", "I64", and "I" used on Windows targets, which depend on the MS runtime. -Waligned-new Warn about a new-expression of a type that requires greater alignment than the "alignof(std::max_align_t)" but uses an allocation function without an explicit alignment parameter. This option is enabled by -Wall. Normally this only warns about global allocation functions, but -Waligned-new=all also warns about class member allocation functions. 
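Returning to -Wno-free-nonheap-object above: the default behaviour (-Wfree-nonheap-object) is easiest to see with a deliberately broken fragment like the one below (illustrative only; whether each call is reported can depend on the GCC version and optimization level). Neither object comes from an allocator, so both calls are invalid.

        #include <stdlib.h>

        char table[16];

        void broken (void)
        {
            char local[32];
            free (local);    /* automatic storage, never allocated on the heap */
            free (table);    /* static storage, likewise not from malloc       */
        }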
-Wplacement-new -Wplacement-new=n Warn about placement new expressions with undefined behavior, such as constructing an object in a buffer that is smaller than the type of the object. For example, the placement new expression below is diagnosed because it attempts to construct an array of 64 integers in a buffer only 64 bytes large. char buf [64]; new (buf) int[64]; This warning is enabled by default. -Wplacement-new=1 This is the default warning level of -Wplacement-new. At this level the warning is not issued for some strictly undefined constructs that GCC allows as extensions for compatibility with legacy code. For example, the following "new" expression is not diagnosed at this level even though it has undefined behavior according to the C++ standard because it writes past the end of the one- element array. struct S { int n, a[1]; }; S *s = (S *)malloc (sizeof *s + 31 * sizeof s->a[0]); new (s->a)int [32](); -Wplacement-new=2 At this level, in addition to diagnosing all the same constructs as at level 1, a diagnostic is also issued for placement new expressions that construct an object in the last member of structure whose type is an array of a single element and whose size is less than the size of the object being constructed. While the previous example would be diagnosed, the following construct makes use of the flexible member array extension to avoid the warning at level 2. struct S { int n, a[]; }; S *s = (S *)malloc (sizeof *s + 32 * sizeof s->a[0]); new (s->a)int [32](); -Wpointer-arith Warn about anything that depends on the "size of" a function type or of "void". GNU C assigns these types a size of 1, for convenience in calculations with "void *" pointers and pointers to functions. In C++, warn also when an arithmetic operation involves "NULL". This warning is also enabled by -Wpedantic. -Wpointer-compare Warn if a pointer is compared with a zero character constant. This usually means that the pointer was meant to be dereferenced. For example: const char *p = foo (); if (p == '\0') return 42; Note that the code above is invalid in C++11. This warning is enabled by default. -Wtype-limits Warn if a comparison is always true or always false due to the limited range of the data type, but do not warn for constant expressions. For example, warn if an unsigned variable is compared against zero with "<" or ">=". This warning is also enabled by -Wextra. -Wabsolute-value (C and Objective-C only) Warn for calls to standard functions that compute the absolute value of an argument when a more appropriate standard function is available. For example, calling "abs(3.14)" triggers the warning because the appropriate function to call to compute the absolute value of a double argument is "fabs". The option also triggers warnings when the argument in a call to such a function has an unsigned type. This warning can be suppressed with an explicit type cast and it is also enabled by -Wextra. -Wcomment -Wcomments Warn whenever a comment-start sequence /* appears in a /* comment, or whenever a backslash-newline appears in a // comment. This warning is enabled by -Wall. -Wtrigraphs Warn if any trigraphs are encountered that might change the meaning of the program. Trigraphs within comments are not warned about, except those that would form escaped newlines. This option is implied by -Wall. If -Wall is not given, this option is still enabled unless trigraphs are enabled. To get trigraph conversion without warnings, but get the other -Wall warnings, use -trigraphs -Wall -Wno-trigraphs. 
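A short, purely illustrative demonstration of the escaped-newline case described above (the function name is arbitrary): with trigraphs enabled, for example "gcc -trigraphs -Wall", the "??/" at the end of the comment becomes a backslash, the comment splices onto the following line, and the assignment silently disappears, so "init" returns 0 instead of 42. -Wtrigraphs points at exactly this kind of surprise, and -Wcomment may flag the resulting backslash-newline in the "//" comment as well.

        int init (void)
        {
            int x = 0;
            // Set the answer??/
            x = 42;   /* this whole line is spliced into the comment above */
            return x;
        }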
-Wundef Warn if an undefined identifier is evaluated in an "#if" directive. Such identifiers are replaced with zero.

-Wexpansion-to-defined Warn whenever "defined" is encountered in the expansion of a macro (including the case where the macro is expanded by an #if directive). Such usage is not portable. This warning is also enabled by -Wpedantic and -Wextra.

-Wunused-macros Warn about macros defined in the main file that are unused. A macro is used if it is expanded or tested for existence at least once. The preprocessor also warns if the macro has not been used at the time it is redefined or undefined. Built-in macros, macros defined on the command line, and macros defined in include files are not warned about. Note: If a macro is actually used, but only used in skipped conditional blocks, then the preprocessor reports it as unused. To avoid the warning in such a case, you might improve the scope of the macro's definition by, for example, moving it into the first skipped block. Alternatively, you could provide a dummy use with something like: #if defined the_macro_causing_the_warning #endif

-Wno-endif-labels Do not warn whenever an "#else" or an "#endif" is followed by text. This sometimes happens in older programs with code of the form #if FOO ... #else FOO ... #endif FOO The second and third "FOO" should be in comments. This warning is on by default.

-Wbad-function-cast (C and Objective-C only) Warn when a function call is cast to a non-matching type. For example, warn if a call to a function returning an integer type is cast to a pointer type.

-Wc90-c99-compat (C and Objective-C only) Warn about features not present in ISO C90, but present in ISO C99. For instance, warn about use of variable length arrays, "long long" type, "bool" type, compound literals, designated initializers, and so on. This option is independent of the standards mode. Warnings are disabled in the expression that follows "__extension__".

-Wc99-c11-compat (C and Objective-C only) Warn about features not present in ISO C99, but present in ISO C11. For instance, warn about use of anonymous structures and unions, "_Atomic" type qualifier, "_Thread_local" storage-class specifier, "_Alignas" specifier, "_Alignof" operator, "_Generic" keyword, and so on. This option is independent of the standards mode. Warnings are disabled in the expression that follows "__extension__".

-Wc11-c2x-compat (C and Objective-C only) Warn about features not present in ISO C11, but present in ISO C2X. For instance, warn about omitting the string in "_Static_assert". This option is independent of the standards mode. Warnings are disabled in the expression that follows "__extension__".

-Wc++-compat (C and Objective-C only) Warn about ISO C constructs that are outside of the common subset of ISO C and ISO C++, e.g. request for implicit conversion from "void *" to a pointer to non-"void" type.

-Wc++11-compat (C++ and Objective-C++ only) Warn about C++ constructs whose meaning differs between ISO C++ 1998 and ISO C++ 2011, e.g., identifiers in ISO C++ 1998 that are keywords in ISO C++ 2011. This warning turns on -Wnarrowing and is enabled by -Wall.

-Wc++14-compat (C++ and Objective-C++ only) Warn about C++ constructs whose meaning differs between ISO C++ 2011 and ISO C++ 2014. This warning is enabled by -Wall.

-Wc++17-compat (C++ and Objective-C++ only) Warn about C++ constructs whose meaning differs between ISO C++ 2014 and ISO C++ 2017. This warning is enabled by -Wall.
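For the -Wc++-compat checks a few entries above, a small illustrative C fragment (the function name is arbitrary): it is valid ISO C but would not compile as C++, both because of the implicit conversion from "void *" and because "new" is a C++ keyword, so compiling it with -Wc++-compat should report both constructs.

        #include <stdlib.h>

        int *make_table (size_t n)
        {
            int *new = malloc (n * sizeof *new);  /* "new" is a C++ keyword; the
                                                     void * result converts
                                                     implicitly only in C        */
            return new;
        }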
-Wcast-qual Warn whenever a pointer is cast so as to remove a type qualifier from the target type. For example, warn if a "const char *" is cast to an ordinary "char *". Also warn when making a cast that introduces a type qualifier in an unsafe way. For example, casting "char **" to "const char **" is unsafe, as in this example: /* p is char ** value. */ const char **q = (const char **) p; /* Assignment of readonly string to const char * is OK. */ *q = "string"; /* Now char** pointer points to read-only memory. */ **p = 'b'; -Wcast-align Warn whenever a pointer is cast such that the required alignment of the target is increased. For example, warn if a "char *" is cast to an "int *" on machines where integers can only be accessed at two- or four-byte boundaries. -Wcast-align=strict Warn whenever a pointer is cast such that the required alignment of the target is increased. For example, warn if a "char *" is cast to an "int *" regardless of the target machine. -Wcast-function-type Warn when a function pointer is cast to an incompatible function pointer. In a cast involving function types with a variable argument list only the types of initial arguments that are provided are considered. Any parameter of pointer- type matches any other pointer-type. Any benign differences in integral types are ignored, like "int" vs. "long" on ILP32 targets. Likewise type qualifiers are ignored. The function type "void (*) (void)" is special and matches everything, which can be used to suppress this warning. In a cast involving pointer to member types this warning warns whenever the type cast is changing the pointer to member type. This warning is enabled by -Wextra. -Wwrite-strings When compiling C, give string constants the type "const char[length]" so that copying the address of one into a non-"const" "char *" pointer produces a warning. These warnings help you find at compile time code that can try to write into a string constant, but only if you have been very careful about using "const" in declarations and prototypes. Otherwise, it is just a nuisance. This is why we did not make -Wall request these warnings. When compiling C++, warn about the deprecated conversion from string literals to "char *". This warning is enabled by default for C++ programs. -Wcatch-value -Wcatch-value=n (C++ and Objective-C++ only) Warn about catch handlers that do not catch via reference. With -Wcatch-value=1 (or -Wcatch-value for short) warn about polymorphic class types that are caught by value. With -Wcatch-value=2 warn about all class types that are caught by value. With -Wcatch-value=3 warn about all types that are not caught by reference. -Wcatch-value is enabled by -Wall. -Wclobbered Warn for variables that might be changed by "longjmp" or "vfork". This warning is also enabled by -Wextra. -Wconditionally-supported (C++ and Objective-C++ only) Warn for conditionally-supported (C++11 [intro.defs]) constructs. -Wconversion Warn for implicit conversions that may alter a value. This includes conversions between real and integer, like "abs (x)" when "x" is "double"; conversions between signed and unsigned, like "unsigned ui = -1"; and conversions to smaller types, like "sqrtf (M_PI)". Do not warn for explicit casts like "abs ((int) x)" and "ui = (unsigned) -1", or if the value is not changed by the conversion like in "abs (2.0)". Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion. 
For C++, also warn for confusing overload resolution for user-defined conversions; and conversions that never use a type conversion operator: conversions to "void", the same type, a base class or a reference to them. Warnings about conversions between signed and unsigned integers are disabled by default in C++ unless -Wsign-conversion is explicitly enabled.

-Wno-conversion-null (C++ and Objective-C++ only) Do not warn for conversions between "NULL" and non-pointer types. -Wconversion-null is enabled by default.

-Wzero-as-null-pointer-constant (C++ and Objective-C++ only) Warn when a literal 0 is used as a null pointer constant. This can be useful to facilitate the conversion to "nullptr" in C++11.

-Wsubobject-linkage (C++ and Objective-C++ only) Warn if a class type has a base or a field whose type uses the anonymous namespace or depends on a type with no linkage. If a type A depends on a type B with no or internal linkage, defining it in multiple translation units would be an ODR violation because the meaning of B is different in each translation unit. If A only appears in a single translation unit, the best way to silence the warning is to give it internal linkage by putting it in an anonymous namespace as well. The compiler doesn't give this warning for types defined in the main .C file, as those are unlikely to have multiple definitions. -Wsubobject-linkage is enabled by default.

-Wdangling-else Warn about constructions where there may be confusion as to which "if" statement an "else" branch belongs. Here is an example of such a case: { if (a) if (b) foo (); else bar (); } In C/C++, every "else" branch belongs to the innermost possible "if" statement, which in this example is "if (b)". This is often not what the programmer expected, as illustrated in the above example by the indentation the programmer chose. When there is the potential for this confusion, GCC issues a warning when this flag is specified. To eliminate the warning, add explicit braces around the innermost "if" statement so there is no way the "else" can belong to the enclosing "if". The resulting code looks like this: { if (a) { if (b) foo (); else bar (); } } This warning is enabled by -Wparentheses.

-Wdate-time Warn when macros "__TIME__", "__DATE__" or "__TIMESTAMP__" are encountered as they might prevent bit-wise-identical reproducible compilations.

-Wdelete-incomplete (C++ and Objective-C++ only) Warn when deleting a pointer to an incomplete type, which may cause undefined behavior at runtime. This warning is enabled by default.

-Wuseless-cast (C++ and Objective-C++ only) Warn when an expression is cast to its own type.

-Wempty-body Warn if an empty body occurs in an "if", "else" or "do while" statement. This warning is also enabled by -Wextra.

-Wenum-compare Warn about a comparison between values of different enumerated types. In C++ enumerated type mismatches in conditional expressions are also diagnosed and the warning is enabled by default. In C this warning is enabled by -Wall.

-Wextra-semi (C++, Objective-C++ only) Warn about a redundant semicolon after an in-class function definition.

-Wjump-misses-init (C, Objective-C only) Warn if a "goto" statement or a "switch" statement jumps forward across the initialization of a variable, or jumps backward to a label after the variable has been initialized. This only warns about variables that are initialized when they are declared. This warning is only supported for C and Objective-C; in C++ this sort of branch is an error in any case. -Wjump-misses-init is included in -Wc++-compat.
It can be disabled with the -Wno-jump-misses-init option. -Wsign-compare Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. In C++, this warning is also enabled by -Wall. In C, it is also enabled by -Wextra. -Wsign-conversion Warn for implicit conversions that may change the sign of an integer value, like assigning a signed integer expression to an unsigned integer variable. An explicit cast silences the warning. In C, this option is enabled also by -Wconversion. -Wfloat-conversion Warn for implicit conversions that reduce the precision of a real value. This includes conversions from real to integer, and from higher precision real to lower precision real values. This option is also enabled by -Wconversion. -Wno-scalar-storage-order Do not warn on suspicious constructs involving reverse scalar storage order. -Wsized-deallocation (C++ and Objective-C++ only) Warn about a definition of an unsized deallocation function void operator delete (void *) noexcept; void operator delete[] (void *) noexcept; without a definition of the corresponding sized deallocation function void operator delete (void *, std::size_t) noexcept; void operator delete[] (void *, std::size_t) noexcept; or vice versa. Enabled by -Wextra along with -fsized-deallocation. -Wsizeof-pointer-div Warn for suspicious divisions of two sizeof expressions that divide the pointer size by the element size, which is the usual way to compute the array size but won't work out correctly with pointers. This warning warns e.g. about "sizeof (ptr) / sizeof (ptr[0])" if "ptr" is not an array, but a pointer. This warning is enabled by -Wall. -Wsizeof-pointer-memaccess Warn for suspicious length parameters to certain string and memory built-in functions if the argument uses "sizeof". This warning triggers for example for "memset (ptr, 0, sizeof (ptr));" if "ptr" is not an array, but a pointer, and suggests a possible fix, or about "memcpy (&foo, ptr, sizeof (&foo));". -Wsizeof-pointer-memaccess also warns about calls to bounded string copy functions like "strncat" or "strncpy" that specify as the bound a "sizeof" expression of the source array. For example, in the following function the call to "strncat" specifies the size of the source string as the bound. That is almost certainly a mistake and so the call is diagnosed. void make_file (const char *name) { char path[PATH_MAX]; strncpy (path, name, sizeof path - 1); strncat (path, ".text", sizeof ".text"); ... } The -Wsizeof-pointer-memaccess option is enabled by -Wall. -Wsizeof-array-argument Warn when the "sizeof" operator is applied to a parameter that is declared as an array in a function definition. This warning is enabled by default for C and C++ programs. -Wmemset-elt-size Warn for suspicious calls to the "memset" built-in function, if the first argument references an array, and the third argument is a number equal to the number of elements, but not equal to the size of the array in memory. This indicates that the user has omitted a multiplication by the element size. This warning is enabled by -Wall. -Wmemset-transposed-args Warn for suspicious calls to the "memset" built-in function where the second argument is not zero and the third argument is zero. For example, the call "memset (buf, sizeof buf, 0)" is diagnosed because "memset (buf, 0, sizeof buf)" was meant instead. The diagnostic is only emitted if the third argument is a literal zero. 
Otherwise, if it is an expression that is folded to zero, or a cast of zero to some type, it is far less likely that the arguments have been mistakenly transposed and no warning is emitted. This warning is enabled by -Wall. -Waddress Warn about suspicious uses of memory addresses. These include using the address of a function in a conditional expression, such as "void func(void); if (func)", and comparisons against the memory address of a string literal, such as "if (x == "abc")". Such uses typically indicate a programmer error: the address of a function always evaluates to true, so their use in a conditional usually indicates that the programmer forgot the parentheses in a function call; and comparisons against string literals result in unspecified behavior and are not portable in C, so they usually indicate that the programmer intended to use "strcmp". This warning is enabled by -Wall. -Waddress-of-packed-member Warn when the address of a packed member of a struct or union is taken, which usually results in an unaligned pointer value. This is enabled by default. -Wlogical-op Warn about suspicious uses of logical operators in expressions. This includes using logical operators in contexts where a bit-wise operator is likely to be expected. Also warns when the operands of a logical operator are the same: extern int a; if (a < 0 && a < 0) { ... } -Wlogical-not-parentheses Warn about logical not used on the left hand side operand of a comparison. This option does not warn if the right operand is considered to be a boolean expression. Its purpose is to detect suspicious code like the following: int a; ... if (!a > 1) { ... } It is possible to suppress the warning by wrapping the LHS into parentheses: if ((!a) > 1) { ... } This warning is enabled by -Wall. -Waggregate-return Warn if any functions that return structures or unions are defined or called. (In languages where you can return an array, this also elicits a warning.) -Wno-aggressive-loop-optimizations Warn if in a loop with a constant number of iterations the compiler detects undefined behavior in some statement during one or more of the iterations. -Wno-attributes Do not warn if an unexpected "__attribute__" is used, such as unrecognized attributes, function attributes applied to variables, etc. This does not stop errors for incorrect use of supported attributes. -Wno-builtin-declaration-mismatch Warn if a built-in function is declared with an incompatible signature or as a non-function, or when a built-in function declared with a type that does not include a prototype is called with arguments whose promoted types do not match those expected by the function. When -Wextra is specified, also warn when a built-in function that takes arguments is declared without a prototype. The -Wbuiltin-declaration-mismatch warning is enabled by default. To avoid the warning include the appropriate header to bring the prototypes of built-in functions into scope. For example, the call to "memset" below is diagnosed by the warning because the function expects a value of type "size_t" as its argument but the type of 32 is "int". With -Wextra, the declaration of the function is diagnosed as well. extern void* memset (); void f (void *d) { memset (d, '\0', 32); } -Wno-builtin-macro-redefined Do not warn if certain built-in macros are redefined. This suppresses warnings for redefinition of "__TIMESTAMP__", "__TIME__", "__DATE__", "__FILE__", and "__BASE_FILE__".
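For illustration, here is a minimal sketch of the kind of loop the -Wno-aggressive-loop-optimizations entry above refers to; the array and function names are invented, and the diagnostic is normally only issued when the loop optimizer runs (for example at -O2 and above):

          /* The loop has a constant trip count of 8, but only a[0]..a[3]
             exist, so iterations 4..7 invoke undefined behavior; the
             iteration-bound analysis described above lets GCC notice
             this and warn.  */
          int a[4];

          int
          sum (void)
          {
            int s = 0;
            for (int i = 0; i < 8; i++)
              s += a[i];
            return s;
          }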
-Wstrict-prototypes (C and Objective-C only) Warn if a function is declared or defined without specifying the argument types. (An old-style function definition is permitted without a warning if preceded by a declaration that specifies the argument types.) -Wold-style-declaration (C and Objective-C only) Warn for obsolescent usages, according to the C Standard, in a declaration. For example, warn if storage-class specifiers like "static" are not the first things in a declaration. This warning is also enabled by -Wextra. -Wold-style-definition (C and Objective-C only) Warn if an old-style function definition is used. A warning is given even if there is a previous prototype. -Wmissing-parameter-type (C and Objective-C only) A function parameter is declared without a type specifier in K&R-style functions: void foo(bar) { } This warning is also enabled by -Wextra. -Wmissing-prototypes (C and Objective-C only) Warn if a global function is defined without a previous prototype declaration. This warning is issued even if the definition itself provides a prototype. Use this option to detect global functions that do not have a matching prototype declaration in a header file. This option is not valid for C++ because all function declarations provide prototypes and a non-matching declaration declares an overload rather than conflict with an earlier declaration. Use -Wmissing-declarations to detect missing declarations in C++. -Wmissing-declarations Warn if a global function is defined without a previous declaration. Do so even if the definition itself provides a prototype. Use this option to detect global functions that are not declared in header files. In C, no warnings are issued for functions with previous non-prototype declarations; use -Wmissing-prototypes to detect missing prototypes. In C++, no warnings are issued for function templates, or for inline functions, or for functions in anonymous namespaces. -Wmissing-field-initializers Warn if a structure's initializer has some fields missing. For example, the following code causes such a warning, because "x.h" is implicitly zero: struct s { int f, g, h; }; struct s x = { 3, 4 }; This option does not warn about designated initializers, so the following modification does not trigger a warning: struct s { int f, g, h; }; struct s x = { .f = 3, .g = 4 }; In C this option does not warn about the universal zero initializer { 0 }: struct s { int f, g, h; }; struct s x = { 0 }; Likewise, in C++ this option does not warn about the empty { } initializer, for example: struct s { int f, g, h; }; s x = { }; This warning is included in -Wextra. To get other -Wextra warnings without this one, use -Wextra -Wno-missing-field-initializers. -Wno-multichar Do not warn if a multicharacter constant ('FOOF') is used. Usually they indicate a typo in the user's code, as they have implementation-defined values, and should not be used in portable code. -Wnormalized=[none|id|nfc|nfkc] In ISO C and ISO C++, two identifiers are different if they are different sequences of characters. However, sometimes when characters outside the basic ASCII character set are used, you can have two different character sequences that look the same. To avoid confusion, the ISO 10646 standard sets out some normalization rules which when applied ensure that two sequences that look the same are turned into the same sequence. GCC can warn you if you are using identifiers that have not been normalized; this option controls that warning. There are four levels of warning supported by GCC. 
The default is -Wnormalized=nfc, which warns about any identifier that is not in the ISO 10646 "C" normalized form, NFC. NFC is the recommended form for most uses. It is equivalent to -Wnormalized. Unfortunately, there are some characters allowed in identifiers by ISO C and ISO C++ that, when turned into NFC, are not allowed in identifiers. That is, there's no way to use these symbols in portable ISO C or C++ and have all your identifiers in NFC. -Wnormalized=id suppresses the warning for these characters. It is hoped that future versions of the standards involved will correct this, which is why this option is not the default. You can switch the warning off for all characters by writing -Wnormalized=none or -Wno-normalized. You should only do this if you are using some other normalization scheme (like "D"), because otherwise you can easily create bugs that are literally impossible to see. Some characters in ISO 10646 have distinct meanings but look identical in some fonts or display methodologies, especially once formatting has been applied. For instance "\u207F", "SUPERSCRIPT LATIN SMALL LETTER N", displays just like a regular "n" that has been placed in a superscript. ISO 10646 defines the NFKC normalization scheme to convert all these into a standard form as well, and GCC warns if your code is not in NFKC if you use -Wnormalized=nfkc. This warning is comparable to warning about every identifier that contains the letter O because it might be confused with the digit 0, and so is not the default, but may be useful as a local coding convention if the programming environment cannot be fixed to display these characters distinctly. -Wno-attribute-warning Do not warn about usage of functions declared with "warning" attribute. By default, this warning is enabled. -Wno-attribute-warning can be used to disable the warning or -Wno-error=attribute-warning can be used to disable the error when compiled with -Werror flag. -Wno-deprecated Do not warn about usage of deprecated features. -Wno-deprecated-declarations Do not warn about uses of functions, variables, and types marked as deprecated by using the "deprecated" attribute. -Wno-overflow Do not warn about compile-time overflow in constant expressions. -Wno-odr Warn about One Definition Rule violations during link-time optimization. Requires -flto-odr-type-merging to be enabled. Enabled by default. -Wopenmp-simd Warn if the vectorizer cost model overrides the OpenMP simd directive set by user. The -fsimd-cost-model=unlimited option can be used to relax the cost model. -Woverride-init (C and Objective-C only) Warn if an initialized field without side effects is overridden when using designated initializers. This warning is included in -Wextra. To get other -Wextra warnings without this one, use -Wextra -Wno-override-init. -Woverride-init-side-effects (C and Objective-C only) Warn if an initialized field with side effects is overridden when using designated initializers. This warning is enabled by default. -Wpacked Warn if a structure is given the packed attribute, but the packed attribute has no effect on the layout or size of the structure. Such structures may be mis-aligned for little benefit. 
For instance, in this code, the variable "f.x" in "struct bar" is misaligned even though "struct bar" does not itself have the packed attribute: struct foo { int x; char a, b, c, d; } __attribute__((packed)); struct bar { char z; struct foo f; }; -Wpacked-bitfield-compat The 4.1, 4.2 and 4.3 series of GCC ignore the "packed" attribute on bit-fields of type "char". This has been fixed in GCC 4.4 but the change can lead to differences in the structure layout. GCC informs you when the offset of such a field has changed in GCC 4.4. For example there is no longer a 4-bit padding between field "a" and "b" in this structure: struct foo { char a:4; char b:8; } __attribute__ ((packed)); This warning is enabled by default. Use -Wno-packed-bitfield-compat to disable this warning. -Wpacked-not-aligned (C, C++, Objective-C and Objective-C++ only) Warn if a structure field with explicitly specified alignment in a packed struct or union is misaligned. For example, a warning will be issued on "struct S", like, "warning: alignment 1 of 'struct S' is less than 8", in this code: struct __attribute__ ((aligned (8))) S8 { char a[8]; }; struct __attribute__ ((packed)) S { struct S8 s8; }; This warning is enabled by -Wall. -Wpadded Warn if padding is included in a structure, either to align an element of the structure or to align the whole structure. Sometimes when this happens it is possible to rearrange the fields of the structure to reduce the padding and so make the structure smaller. -Wredundant-decls Warn if anything is declared more than once in the same scope, even in cases where multiple declaration is valid and changes nothing. -Wno-restrict Warn when an object referenced by a "restrict"-qualified parameter (or, in C++, a "__restrict"-qualified parameter) is aliased by another argument, or when copies between such objects overlap. For example, the call to the "strcpy" function below attempts to truncate the string by replacing its initial characters with the last four. However, because the call writes the terminating NUL into "a[4]", the copies overlap and the call is diagnosed. void foo (void) { char a[] = "abcd1234"; strcpy (a, a + 4); ... } The -Wrestrict option detects some instances of simple overlap even without optimization but works best at -O2 and above. It is included in -Wall. -Wnested-externs (C and Objective-C only) Warn if an "extern" declaration is encountered within a function. -Wno-inherited-variadic-ctor Suppress warnings about use of C++11 inheriting constructors when the base class inherited from has a C variadic constructor; the warning is on by default because the ellipsis is not inherited. -Winline Warn if a function that is declared as inline cannot be inlined. Even with this option, the compiler does not warn about failures to inline functions declared in system headers. The compiler uses a variety of heuristics to determine whether or not to inline a function. For example, the compiler takes into account the size of the function being inlined and the amount of inlining that has already been done in the current function. Therefore, seemingly insignificant changes in the source program can cause the warnings produced by -Winline to appear or disappear. -Wno-invalid-offsetof (C++ and Objective-C++ only) Suppress warnings from applying the "offsetof" macro to a non-POD type. According to the 2014 ISO C++ standard, applying "offsetof" to a non-standard-layout type is undefined. In existing C++ implementations, however, "offsetof" typically gives meaningful results. 
This flag is for users who are aware that they are writing nonportable code and who have deliberately chosen to ignore the warning about it. The restrictions on "offsetof" may be relaxed in a future version of the C++ standard. -Wint-in-bool-context Warn for suspicious use of integer values where boolean values are expected, such as conditional expressions (?:) using non-boolean integer constants in boolean context, like "if (a <= b ? 2 : 3)". Or left shifting of signed integers in boolean context, like "for (a = 0; 1 << a; a++);". Likewise for all kinds of multiplications regardless of the data type. This warning is enabled by -Wall. -Wno-int-to-pointer-cast Suppress warnings from casts to pointer type of an integer of a different size. In C++, casting to a pointer type of smaller size is an error. -Wint-to-pointer-cast is enabled by default. -Wno-pointer-to-int-cast (C and Objective-C only) Suppress warnings from casts from a pointer to an integer type of a different size. -Winvalid-pch Warn if a precompiled header is found in the search path but cannot be used. -Wlong-long Warn if "long long" type is used. This is enabled by either -Wpedantic or -Wtraditional in ISO C90 and C++98 modes. To inhibit the warning messages, use -Wno-long-long. -Wvariadic-macros Warn if variadic macros are used in ISO C90 mode, or if the GNU alternate syntax is used in ISO C99 mode. This is enabled by either -Wpedantic or -Wtraditional. To inhibit the warning messages, use -Wno-variadic-macros. -Wvarargs Warn upon questionable usage of the macros used to handle variable arguments like "va_start". This is default. To inhibit the warning messages, use -Wno-varargs. -Wvector-operation-performance Warn if vector operation is not implemented via SIMD capabilities of the architecture. Mainly useful for the performance tuning. Vector operation can be implemented "piecewise", which means that the scalar operation is performed on every vector element; "in parallel", which means that the vector operation is implemented using scalars of wider type, which normally is more performance efficient; and "as a single scalar", which means that vector fits into a scalar type. -Wno-virtual-move-assign Suppress warnings about inheriting from a virtual base with a non-trivial C++11 move assignment operator. This is dangerous because if the virtual base is reachable along more than one path, it is moved multiple times, which can mean both objects end up in the moved-from state. If the move assignment operator is written to avoid moving from a moved-from object, this warning can be disabled. -Wvla Warn if a variable-length array is used in the code. -Wno-vla prevents the -Wpedantic warning of the variable-length array. -Wvla-larger-than=byte-size If this option is used, the compiler will warn for declarations of variable-length arrays whose size is either unbounded, or bounded by an argument that allows the array size to exceed byte-size bytes. This is similar to how -Walloca-larger-than=byte-size works, but with variable-length arrays. Note that GCC may optimize small variable-length arrays of a known value into plain arrays, so this warning may not get triggered for such arrays. -Wvla-larger-than=PTRDIFF_MAX is enabled by default but is typically only effective when -ftree-vrp is active (default for -O2 and above). See also -Walloca-larger-than=byte-size. -Wno-vla-larger-than Disable -Wvla-larger-than= warnings. The option is equivalent to -Wvla-larger-than=SIZE_MAX or larger.
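As a hedged sketch of what -Wvla and -Wvla-larger-than= above are aimed at, consider the following function (the names are invented): the array size comes from an arbitrary argument, so -Wvla flags the declaration, and -Wvla-larger-than=byte-size additionally complains because nothing bounds n below the given limit:

          #include <stddef.h>

          int
          first_of (const int *p, size_t n)
          {
            int tmp[n];                 /* variable-length array */
            for (size_t i = 0; i < n; i++)
              tmp[i] = p[i];
            return tmp[0];
          }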
-Wvolatile-register-var Warn if a register variable is declared volatile. The volatile modifier does not inhibit all optimizations that may eliminate reads and/or writes to register variables. This warning is enabled by -Wall. -Wdisabled-optimization Warn if a requested optimization pass is disabled. This warning does not generally indicate that there is anything wrong with your code; it merely indicates that GCC's optimizers are unable to handle the code effectively. Often, the problem is that your code is too big or too complex; GCC refuses to optimize programs when the optimization itself is likely to take inordinate amounts of time. -Wpointer-sign (C and Objective-C only) Warn for pointer argument passing or assignment with different signedness. This option is only supported for C and Objective-C. It is implied by -Wall and by -Wpedantic, which can be disabled with -Wno-pointer-sign. -Wstack-protector This option is only active when -fstack-protector is active. It warns about functions that are not protected against stack smashing. -Woverlength-strings Warn about string constants that are longer than the "minimum maximum" length specified in the C standard. Modern compilers generally allow string constants that are much longer than the standard's minimum limit, but very portable programs should avoid using longer strings. The limit applies after string constant concatenation, and does not count the trailing NUL. In C90, the limit was 509 characters; in C99, it was raised to 4095. C++98 does not specify a normative minimum maximum, so we do not diagnose overlength strings in C++. This option is implied by -Wpedantic, and can be disabled with -Wno-overlength-strings. -Wunsuffixed-float-constants (C and Objective-C only) Issue a warning for any floating constant that does not have a suffix. When used together with -Wsystem-headers it warns about such constants in system header files. This can be useful when preparing code to use with the "FLOAT_CONST_DECIMAL64" pragma from the decimal floating- point extension to C99. -Wno-designated-init (C and Objective-C only) Suppress warnings when a positional initializer is used to initialize a structure that has been marked with the "designated_init" attribute. -Whsa Issue a warning when HSAIL cannot be emitted for the compiled function or OpenMP construct. Options for Debugging Your Program To tell GCC to emit extra information for use by a debugger, in almost all cases you need only to add -g to your other options. GCC allows you to use -g with -O. The shortcuts taken by optimized code may occasionally be surprising: some variables you declared may not exist at all; flow of control may briefly move where you did not expect it; some statements may not be executed because they compute constant results or their values are already at hand; some statements may execute in different places because they have been moved out of loops. Nevertheless it is possible to debug optimized output. This makes it reasonable to use the optimizer for programs that might have bugs. If you are not using some other optimization option, consider using -Og with -g. With no -O option at all, some compiler passes that collect information useful for debugging do not run at all, so that -Og may result in a better debugging experience. -g Produce debugging information in the operating system's native format (stabs, COFF, XCOFF, or DWARF). GDB can work with this debugging information. 
On most systems that use stabs format, -g enables use of extra debugging information that only GDB can use; this extra information makes debugging work better in GDB but probably makes other debuggers crash or refuse to read the program. If you want to control for certain whether to generate the extra information, use -gstabs+, -gstabs, -gxcoff+, -gxcoff, or -gvms (see below). -ggdb Produce debugging information for use by GDB. This means to use the most expressive format available (DWARF, stabs, or the native format if neither of those are supported), including GDB extensions if at all possible. -gdwarf -gdwarf-version Produce debugging information in DWARF format (if that is supported). The value of version may be either 2, 3, 4 or 5; the default version for most targets is 4. DWARF Version 5 is only experimental. Note that with DWARF Version 2, some ports require and always use some non-conflicting DWARF 3 extensions in the unwind tables. Version 4 may require GDB 7.0 and -fvar-tracking-assignments for maximum benefit. GCC no longer supports DWARF Version 1, which is substantially different than Version 2 and later. For historical reasons, some other DWARF-related options such as -fno-dwarf2-cfi-asm) retain a reference to DWARF Version 2 in their names, but apply to all currently-supported versions of DWARF. -gstabs Produce debugging information in stabs format (if that is supported), without GDB extensions. This is the format used by DBX on most BSD systems. On MIPS, Alpha and System V Release 4 systems this option produces stabs debugging output that is not understood by DBX. On System V Release 4 systems this option requires the GNU assembler. -gstabs+ Produce debugging information in stabs format (if that is supported), using GNU extensions understood only by the GNU debugger (GDB). The use of these extensions is likely to make other debuggers crash or refuse to read the program. -gxcoff Produce debugging information in XCOFF format (if that is supported). This is the format used by the DBX debugger on IBM RS/6000 systems. -gxcoff+ Produce debugging information in XCOFF format (if that is supported), using GNU extensions understood only by the GNU debugger (GDB). The use of these extensions is likely to make other debuggers crash or refuse to read the program, and may cause assemblers other than the GNU assembler (GAS) to fail with an error. -gvms Produce debugging information in Alpha/VMS debug format (if that is supported). This is the format used by DEBUG on Alpha/VMS systems. -glevel -ggdblevel -gstabslevel -gxcofflevel -gvmslevel Request debugging information and also use level to specify how much information. The default level is 2. Level 0 produces no debug information at all. Thus, -g0 negates -g. Level 1 produces minimal information, enough for making backtraces in parts of the program that you don't plan to debug. This includes descriptions of functions and external variables, and line number tables, but no information about local variables. Level 3 includes extra information, such as all the macro definitions present in the program. Some debuggers support macro expansion when you use -g3. If you use multiple -g options, with or without level numbers, the last such option is the one that is effective. -gdwarf does not accept a concatenated debug level, to avoid confusion with -gdwarf-level. Instead use an additional -glevel option to change the debug level for DWARF. 
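As an illustration of the difference between the default level 2 and -g3, consider the fragment below (the macro and function names are invented): only with -g3 is the definition of SCALE recorded in the debug information, so a debugger that supports macro expansion, such as GDB, can evaluate it; with -g2 the function, its variables and the line table are described, but the macro is not:

          #define SCALE(x) ((x) * 16)

          int
          scaled (int v)
          {
            return SCALE (v);   /* with -g3, "print SCALE(4)" works in GDB */
          }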
-feliminate-unused-debug-symbols Produce debugging information in stabs format (if that is supported), for only symbols that are actually used. -femit-class-debug-always Instead of emitting debugging information for a C++ class in only one object file, emit it in all object files using the class. This option should be used only with debuggers that are unable to handle the way GCC normally emits debugging information for classes because using this option increases the size of debugging information by as much as a factor of two. -fno-merge-debug-strings Direct the linker to not merge together strings in the debugging information that are identical in different object files. Merging is not supported by all assemblers or linkers. Merging decreases the size of the debug information in the output file at the cost of increasing link processing time. Merging is enabled by default. -fdebug-prefix-map=old=new When compiling files residing in directory old, record debugging information describing them as if the files resided in directory new instead. This can be used to replace a build-time path with an install-time path in the debug info. It can also be used to change an absolute path to a relative path by using . for new. This can give more reproducible builds, which are location independent, but may require an extra command to tell GDB where to find the source files. See also -ffile-prefix-map. -fvar-tracking Run variable tracking pass. It computes where variables are stored at each position in code. Better debugging information is then generated (if the debugging information format supports this information). It is enabled by default when compiling with optimization (-Os, -O, -O2, ...), debugging information (-g) and the debug info format supports it. -fvar-tracking-assignments Annotate assignments to user variables early in the compilation and attempt to carry the annotations over throughout the compilation all the way to the end, in an attempt to improve debug information while optimizing. Use of -gdwarf-4 is recommended along with it. It can be enabled even if var-tracking is disabled, in which case annotations are created and maintained, but discarded at the end. By default, this flag is enabled together with -fvar-tracking, except when selective scheduling is enabled. -gsplit-dwarf Separate as much DWARF debugging information as possible into a separate output file with the extension .dwo. This option allows the build system to avoid linking files with debug information. To be useful, this option requires a debugger capable of reading .dwo files. -gdescribe-dies Add description attributes to some DWARF DIEs that have no name attribute, such as artificial variables, external references and call site parameter DIEs. -gpubnames Generate DWARF ".debug_pubnames" and ".debug_pubtypes" sections. -ggnu-pubnames Generate ".debug_pubnames" and ".debug_pubtypes" sections in a format suitable for conversion into a GDB index. This option is only useful with a linker that can produce GDB index version 7. -fdebug-types-section When using DWARF Version 4 or higher, type DIEs can be put into their own ".debug_types" section instead of making them part of the ".debug_info" section. It is more efficient to put them in a separate comdat section since the linker can then remove duplicates. But not all DWARF consumers support ".debug_types" sections yet and on some objects ".debug_types" produces larger instead of smaller debugging information. 
-grecord-gcc-switches -gno-record-gcc-switches This switch causes the command-line options used to invoke the compiler that may affect code generation to be appended to the DW_AT_producer attribute in DWARF debugging information. The options are concatenated with spaces separating them from each other and from the compiler version. It is enabled by default. See also -frecord-gcc-switches for another way of storing compiler options into the object file. -gstrict-dwarf Disallow using extensions of later DWARF standard version than selected with -gdwarf-version. On most targets using non-conflicting DWARF extensions from later standard versions is allowed. -gno-strict-dwarf Allow using extensions of later DWARF standard version than selected with -gdwarf-version. -gas-loc-support Inform the compiler that the assembler supports ".loc" directives. It may then use them for the assembler to generate DWARF2+ line number tables. This is generally desirable, because assembler-generated line-number tables are a lot more compact than those the compiler can generate itself. This option will be enabled by default if, at GCC configure time, the assembler was found to support such directives. -gno-as-loc-support Force GCC to generate DWARF2+ line number tables internally, if DWARF2+ line number tables are to be generated. -gas-locview-support Inform the compiler that the assembler supports "view" assignment and reset assertion checking in ".loc" directives. This option will be enabled by default if, at GCC configure time, the assembler was found to support them. -gno-as-locview-support Force GCC to assign view numbers internally, if -gvariable-location-views are explicitly requested. -gcolumn-info -gno-column-info Emit location column information into DWARF debugging information, rather than just file and line. This option is enabled by default. -gstatement-frontiers -gno-statement-frontiers This option causes GCC to create markers in the internal representation at the beginning of statements, and to keep them roughly in place throughout compilation, using them to guide the output of "is_stmt" markers in the line number table. This is enabled by default when compiling with optimization (-Os, -O, -O2, ...), and outputting DWARF 2 debug information at the normal level. -gvariable-location-views -gvariable-location-views=incompat5 -gno-variable-location-views Augment variable location lists with progressive view numbers implied from the line number table. This enables debug information consumers to inspect state at certain points of the program, even if no instructions associated with the corresponding source locations are present at that point. If the assembler lacks support for view numbers in line number tables, this will cause the compiler to emit the line number table, which generally makes them somewhat less compact. The augmented line number tables and location lists are fully backward-compatible, so they can be consumed by debug information consumers that are not aware of these augmentations, but they won't derive any benefit from them either. This is enabled by default when outputting DWARF 2 debug information at the normal level, as long as there is assembler support, -fvar-tracking-assignments is enabled and -gstrict-dwarf is not. When assembler support is not available, this may still be enabled, but it will force GCC to output internal line number tables, and if -ginternal-reset-location-views is not enabled, that will most certainly lead to silently mismatching location views.
There is a proposed representation for view numbers that is not backward compatible with the location list format introduced in DWARF 5, that can be enabled with -gvariable-location-views=incompat5. This option may be removed in the future, is only provided as a reference implementation of the proposed representation. Debug information consumers are not expected to support this extended format, and they would be rendered unable to decode location lists using it. -ginternal-reset-location-views -gno-internal-reset-location-views Attempt to determine location views that can be omitted from location view lists. This requires the compiler to have very accurate insn length estimates, which isn't always the case, and it may cause incorrect view lists to be generated silently when using an assembler that does not support location view lists. The GNU assembler will flag any such error as a "view number mismatch". This is only enabled on ports that define a reliable estimation function. -ginline-points -gno-inline-points Generate extended debug information for inlined functions. Location view tracking markers are inserted at inlined entry points, so that address and view numbers can be computed and output in debug information. This can be enabled independently of location views, in which case the view numbers won't be output, but it can only be enabled along with statement frontiers, and it is only enabled by default if location views are enabled. -gz[=type] Produce compressed debug sections in DWARF format, if that is supported. If type is not given, the default type depends on the capabilities of the assembler and linker used. type may be one of none (don't compress debug sections), zlib (use zlib compression in ELF gABI format), or zlib-gnu (use zlib compression in traditional GNU format). If the linker doesn't support writing compressed debug sections, the option is rejected. Otherwise, if the assembler does not support them, -gz is silently ignored when producing object files. -femit-struct-debug-baseonly Emit debug information for struct-like types only when the base name of the compilation source file matches the base name of file in which the struct is defined. This option substantially reduces the size of debugging information, but at significant potential loss in type information to the debugger. See -femit-struct-debug-reduced for a less aggressive option. See -femit-struct-debug-detailed for more detailed control. This option works only with DWARF debug output. -femit-struct-debug-reduced Emit debug information for struct-like types only when the base name of the compilation source file matches the base name of file in which the type is defined, unless the struct is a template or defined in a system header. This option significantly reduces the size of debugging information, with some potential loss in type information to the debugger. See -femit-struct-debug-baseonly for a more aggressive option. See -femit-struct-debug-detailed for more detailed control. This option works only with DWARF debug output. -femit-struct-debug-detailed[=spec-list] Specify the struct-like types for which the compiler generates debug information. The intent is to reduce duplicate struct debug information between different object files within the same program. This option is a detailed version of -femit-struct-debug-reduced and -femit-struct-debug-baseonly, which serves for most needs. 
A specification has the syntax [dir:|ind:][ord:|gen:](any|sys|base|none). The optional first word limits the specification to structs that are used directly (dir:) or used indirectly (ind:). A struct type is used directly when it is the type of a variable, member. Indirect uses arise through pointers to structs. That is, when use of an incomplete struct is valid, the use is indirect. An example is struct one direct; struct two * indirect;. The optional second word limits the specification to ordinary structs (ord:) or generic structs (gen:). Generic structs are a bit complicated to explain. For C++, these are non-explicit specializations of template classes, or non-template classes within the above. Other programming languages have generics, but -femit-struct-debug-detailed does not yet implement them. The third word specifies the source files for those structs for which the compiler should emit debug information. The values none and any have the normal meaning. The value base means that the base of the name of the file in which the type declaration appears must match the base of the name of the main compilation file. In practice, this means that when compiling foo.c, debug information is generated for types declared in that file and foo.h, but not other header files. The value sys means those types satisfying base or declared in system or compiler headers. You may need to experiment to determine the best settings for your application. The default is -femit-struct-debug-detailed=all. This option works only with DWARF debug output. -fno-dwarf2-cfi-asm Emit DWARF unwind info as compiler generated ".eh_frame" section instead of using GAS ".cfi_*" directives. -fno-eliminate-unused-debug-types Normally, when producing DWARF output, GCC avoids producing debug symbol output for types that are nowhere used in the source file being compiled. Sometimes it is useful to have GCC emit debugging information for all types declared in a compilation unit, regardless of whether or not they are actually used in that compilation unit, for example if, in the debugger, you want to cast a value to a type that is not actually used in your program (but is declared). More often, however, this results in a significant amount of wasted space. Options That Control Optimization These options control various sorts of optimizations. Without any optimization option, the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you expect from the source code. Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program. The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them. Not all optimizations are controlled directly by a flag. Only optimizations that have a flag are listed in this section. Most optimizations are completely disabled at -O0 or if an -O level is not set on the command line, even if individual optimization flags are specified. Similarly, -Og suppresses many optimization passes.
Depending on the target and how GCC was configured, a slightly different set of optimizations may be enabled at each -O level than those listed here. You can invoke GCC with -Q --help=optimizers to find out the exact set of optimizations that are enabled at each level. -O -O1 Optimize. Optimizing compilation takes somewhat more time, and a lot more memory for a large function. With -O, the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time. -O turns on the following optimization flags: -fauto-inc-dec -fbranch-count-reg -fcombine-stack-adjustments -fcompare-elim -fcprop-registers -fdce -fdefer-pop -fdelayed-branch -fdse -fforward-propagate -fguess-branch-probability -fif-conversion -fif-conversion2 -finline-functions-called-once -fipa-profile -fipa-pure-const -fipa-reference -fipa-reference-addressable -fmerge-constants -fmove-loop-invariants -fomit-frame-pointer -freorder-blocks -fshrink-wrap -fshrink-wrap-separate -fsplit-wide-types -fssa-backprop -fssa-phiopt -ftree-bit-ccp -ftree-ccp -ftree-ch -ftree-coalesce-vars -ftree-copy-prop -ftree-dce -ftree-dominator-opts -ftree-dse -ftree-forwprop -ftree-fre -ftree-phiprop -ftree-pta -ftree-scev-cprop -ftree-sink -ftree-slsr -ftree-sra -ftree-ter -funit-at-a-time -O2 Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. As compared to -O, this option increases both compilation time and the performance of the generated code. -O2 turns on all optimization flags specified by -O. It also turns on the following optimization flags: -falign-functions -falign-jumps -falign-labels -falign-loops -fcaller-saves -fcode-hoisting -fcrossjumping -fcse-follow-jumps -fcse-skip-blocks -fdelete-null-pointer-checks -fdevirtualize -fdevirtualize-speculatively -fexpensive-optimizations -fgcse -fgcse-lm -fhoist-adjacent-loads -finline-small-functions -findirect-inlining -fipa-bit-cp -fipa-cp -fipa-icf -fipa-ra -fipa-sra -fipa-vrp -fisolate-erroneous-paths-dereference -flra-remat -foptimize-sibling-calls -foptimize-strlen -fpartial-inlining -fpeephole2 -freorder-blocks-algorithm=stc -freorder-blocks-and-partition -freorder-functions -frerun-cse-after-loop -fschedule-insns -fschedule-insns2 -fsched-interblock -fsched-spec -fstore-merging -fstrict-aliasing -fthread-jumps -ftree-builtin-call-dce -ftree-pre -ftree-switch-conversion -ftree-tail-merge -ftree-vrp Please note the warning under -fgcse about invoking -O2 on programs that use computed gotos. -O3 Optimize yet more. -O3 turns on all optimizations specified by -O2 and also turns on the following optimization flags: -fgcse-after-reload -finline-functions -fipa-cp-clone -floop-interchange -floop-unroll-and-jam -fpeel-loops -fpredictive-commoning -fsplit-paths -ftree-loop-distribute-patterns -ftree-loop-distribution -ftree-loop-vectorize -ftree-partial-pre -ftree-slp-vectorize -funswitch-loops -fvect-cost-model -fversion-loops-for-strides -O0 Reduce compilation time and make debugging produce the expected results. This is the default. -Os Optimize for size. -Os enables all -O2 optimizations except those that often increase code size: -falign-functions -falign-jumps -falign-labels -falign-loops -fprefetch-loop-arrays -freorder-blocks-algorithm=stc It also enables -finline-functions, causes the compiler to tune for code size rather than execution speed, and performs further optimizations designed to reduce code size. -Ofast Disregard strict standards compliance. 
-Ofast enables all -O3 optimizations. It also enables optimizations that are not valid for all standard-compliant programs. It turns on -ffast-math and the Fortran-specific -fstack-arrays, unless -fmax-stack-var-size is specified, and -fno-protect-parens. -Og Optimize debugging experience. -Og should be the optimization level of choice for the standard edit-compile- debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience. It is a better choice than -O0 for producing debuggable code because some compiler passes that collect debug information are disabled at -O0. Like -O0, -Og completely disables a number of optimization passes so that individual options controlling them have no effect. Otherwise -Og enables all -O1 optimization flags except for those that may interfere with debugging: -fbranch-count-reg -fdelayed-branch -fif-conversion -fif-conversion2 -finline-functions-called-once -fmove-loop-invariants -fssa-phiopt -ftree-bit-ccp -ftree-pta -ftree-sra If you use multiple -O options, with or without level numbers, the last such option is the one that is effective. Options of the form -fflag specify machine-independent flags. Most flags have both positive and negative forms; the negative form of -ffoo is -fno-foo. In the table below, only one of the forms is listed---the one you typically use. You can figure out the other form by either removing no- or adding it. The following options control specific optimizations. They are either activated by -O options or are related to ones that are. You can use the following flags in the rare cases when "fine- tuning" of optimizations to be performed is desired. -fno-defer-pop For machines that must pop arguments after a function call, always pop the arguments as soon as each function returns. At levels -O1 and higher, -fdefer-pop is the default; this allows the compiler to let arguments accumulate on the stack for several function calls and pop them all at once. -fforward-propagate Perform a forward propagation pass on RTL. The pass tries to combine two instructions and checks if the result can be simplified. If loop unrolling is active, two passes are performed and the second is scheduled after loop unrolling. This option is enabled by default at optimization levels -O, -O2, -O3, -Os. -ffp-contract=style -ffp-contract=off disables floating-point expression contraction. -ffp-contract=fast enables floating-point expression contraction such as forming of fused multiply-add operations if the target has native support for them. -ffp-contract=on enables floating-point expression contraction if allowed by the language standard. This is currently not implemented and treated equal to -ffp-contract=off. The default is -ffp-contract=fast. -fomit-frame-pointer Omit the frame pointer in functions that don't need one. This avoids the instructions to save, set up and restore the frame pointer; on many targets it also makes an extra register available. On some targets this flag has no effect because the standard calling sequence always uses a frame pointer, so it cannot be omitted. Note that -fno-omit-frame-pointer doesn't guarantee the frame pointer is used in all functions. Several targets always omit the frame pointer in leaf functions. Enabled by default at -O and higher. -foptimize-sibling-calls Optimize sibling and tail recursive calls. Enabled at levels -O2, -O3, -Os. -foptimize-strlen Optimize various standard C string functions (e.g. 
"strlen", "strchr" or "strcpy") and their "_FORTIFY_SOURCE" counterparts into faster alternatives. Enabled at levels -O2, -O3. -fno-inline Do not expand any functions inline apart from those marked with the "always_inline" attribute. This is the default when not optimizing. Single functions can be exempted from inlining by marking them with the "noinline" attribute. -finline-small-functions Integrate functions into their callers when their body is smaller than expected function call code (so overall size of program gets smaller). The compiler heuristically decides which functions are simple enough to be worth integrating in this way. This inlining applies to all functions, even those not declared inline. Enabled at levels -O2, -O3, -Os. -findirect-inlining Inline also indirect calls that are discovered to be known at compile time thanks to previous inlining. This option has any effect only when inlining itself is turned on by the -finline-functions or -finline-small-functions options. Enabled at levels -O2, -O3, -Os. -finline-functions Consider all functions for inlining, even if they are not declared inline. The compiler heuristically decides which functions are worth integrating in this way. If all calls to a given function are integrated, and the function is declared "static", then the function is normally not output as assembler code in its own right. Enabled at levels -O3, -Os. Also enabled by -fprofile-use and -fauto-profile. -finline-functions-called-once Consider all "static" functions called once for inlining into their caller even if they are not marked "inline". If a call to a given function is integrated, then the function is not output as assembler code in its own right. Enabled at levels -O1, -O2, -O3 and -Os, but not -Og. -fearly-inlining Inline functions marked by "always_inline" and functions whose body seems smaller than the function call overhead early before doing -fprofile-generate instrumentation and real inlining pass. Doing so makes profiling significantly cheaper and usually inlining faster on programs having large chains of nested wrapper functions. Enabled by default. -fipa-sra Perform interprocedural scalar replacement of aggregates, removal of unused parameters and replacement of parameters passed by reference by parameters passed by value. Enabled at levels -O2, -O3 and -Os. -finline-limit=n By default, GCC limits the size of functions that can be inlined. This flag allows coarse control of this limit. n is the size of functions that can be inlined in number of pseudo instructions. Inlining is actually controlled by a number of parameters, which may be specified individually by using --param name=value. The -finline-limit=n option sets some of these parameters as follows: max-inline-insns-single is set to n/2. max-inline-insns-auto is set to n/2. See below for a documentation of the individual parameters controlling inlining and for the defaults of these parameters. Note: there may be no value to -finline-limit that results in default behavior. Note: pseudo instruction represents, in this particular context, an abstract measurement of function's size. In no way does it represent a count of assembly instructions and as such its exact meaning might change from one release to an another. -fno-keep-inline-dllexport This is a more fine-grained version of -fkeep-inline-functions, which applies only to functions that are declared using the "dllexport" attribute or declspec. 
-fkeep-inline-functions In C, emit "static" functions that are declared "inline" into the object file, even if the function has been inlined into all of its callers. This switch does not affect functions using the "extern inline" extension in GNU C90. In C++, emit any and all inline functions into the object file. -fkeep-static-functions Emit "static" functions into the object file, even if the function is never used. -fkeep-static-consts Emit variables declared "static const" when optimization isn't turned on, even if the variables aren't referenced. GCC enables this option by default. If you want to force the compiler to check if a variable is referenced, regardless of whether or not optimization is turned on, use the -fno-keep-static-consts option. -fmerge-constants Attempt to merge identical constants (string constants and floating-point constants) across compilation units. This option is the default for optimized compilation if the assembler and linker support it. Use -fno-merge-constants to inhibit this behavior. Enabled at levels -O, -O2, -O3, -Os. -fmerge-all-constants Attempt to merge identical constants and identical variables. This option implies -fmerge-constants. In addition to -fmerge-constants this considers e.g. even constant initialized arrays or initialized constant variables with integral or floating-point types. Languages like C or C++ require each variable, including multiple instances of the same variable in recursive calls, to have distinct locations, so using this option results in non-conforming behavior. -fmodulo-sched Perform swing modulo scheduling immediately before the first scheduling pass. This pass looks at innermost loops and reorders their instructions by overlapping different iterations. -fmodulo-sched-allow-regmoves Perform more aggressive SMS-based modulo scheduling with register moves allowed. By setting this flag certain anti- dependences edges are deleted, which triggers the generation of reg-moves based on the life-range analysis. This option is effective only with -fmodulo-sched enabled. -fno-branch-count-reg Disable the optimization pass that scans for opportunities to use "decrement and branch" instructions on a count register instead of instruction sequences that decrement a register, compare it against zero, and then branch based upon the result. This option is only meaningful on architectures that support such instructions, which include x86, PowerPC, IA-64 and S/390. Note that the -fno-branch-count-reg option doesn't remove the decrement and branch instructions from the generated instruction stream introduced by other optimization passes. The default is -fbranch-count-reg at -O1 and higher, except for -Og. -fno-function-cse Do not put function addresses in registers; make each instruction that calls a constant function contain the function's address explicitly. This option results in less efficient code, but some strange hacks that alter the assembler output may be confused by the optimizations performed when this option is not used. The default is -ffunction-cse -fno-zero-initialized-in-bss If the target supports a BSS section, GCC by default puts variables that are initialized to zero into BSS. This can save space in the resulting code. This option turns off this behavior because some programs explicitly rely on variables going to the data section---e.g., so that the resulting executable can find the beginning of that section and/or make assumptions based on that. The default is -fzero-initialized-in-bss. 
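For -fno-zero-initialized-in-bss, a minimal sketch (the variable names are invented): by default both objects below are normally placed in the BSS section because their initial value is zero; with -fno-zero-initialized-in-bss they are placed in the data section instead, which is what programs that locate or scan the data section at run time may expect:

          static int counter = 0;     /* zero-initialized scalar */
          int table[256] = { 0 };     /* zero-initialized array  */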
-fthread-jumps Perform optimizations that check to see if a jump branches to a location where another comparison subsumed by the first is found. If so, the first branch is redirected to either the destination of the second branch or a point immediately following it, depending on whether the condition is known to be true or false. Enabled at levels -O2, -O3, -Os. -fsplit-wide-types When using a type that occupies multiple registers, such as "long long" on a 32-bit system, split the registers apart and allocate them independently. This normally generates better code for those types, but may make debugging more difficult. Enabled at levels -O, -O2, -O3, -Os. -fcse-follow-jumps In common subexpression elimination (CSE), scan through jump instructions when the target of the jump is not reached by any other path. For example, when CSE encounters an "if" statement with an "else" clause, CSE follows the jump when the condition tested is false. Enabled at levels -O2, -O3, -Os. -fcse-skip-blocks This is similar to -fcse-follow-jumps, but causes CSE to follow jumps that conditionally skip over blocks. When CSE encounters a simple "if" statement with no else clause, -fcse-skip-blocks causes CSE to follow the jump around the body of the "if". Enabled at levels -O2, -O3, -Os. -frerun-cse-after-loop Re-run common subexpression elimination after loop optimizations are performed. Enabled at levels -O2, -O3, -Os. -fgcse Perform a global common subexpression elimination pass. This pass also performs global constant and copy propagation. Note: When compiling a program using computed gotos, a GCC extension, you may get better run-time performance if you disable the global common subexpression elimination pass by adding -fno-gcse to the command line. Enabled at levels -O2, -O3, -Os. -fgcse-lm When -fgcse-lm is enabled, global common subexpression elimination attempts to move loads that are only killed by stores into themselves. This allows a loop containing a load/store sequence to be changed to a load outside the loop, and a copy/store within the loop. Enabled by default when -fgcse is enabled. -fgcse-sm When -fgcse-sm is enabled, a store motion pass is run after global common subexpression elimination. This pass attempts to move stores out of loops. When used in conjunction with -fgcse-lm, loops containing a load/store sequence can be changed to a load before the loop and a store after the loop. Not enabled at any optimization level. -fgcse-las When -fgcse-las is enabled, the global common subexpression elimination pass eliminates redundant loads that come after stores to the same memory location (both partial and full redundancies). Not enabled at any optimization level. -fgcse-after-reload When -fgcse-after-reload is enabled, a redundant load elimination pass is performed after reload. The purpose of this pass is to clean up redundant spilling. Enabled by -fprofile-use and -fauto-profile. -faggressive-loop-optimizations This option tells the loop optimizer to use language constraints to derive bounds for the number of iterations of a loop. This assumes that loop code does not invoke undefined behavior by for example causing signed integer overflows or out-of-bound array accesses. The bounds for the number of iterations of a loop are used to guide loop unrolling and peeling and loop exit test optimizations. This option is enabled by default. -funconstrained-commons This option tells the compiler that variables declared in common blocks (e.g. Fortran) may later be overridden with longer trailing arrays. 
This prevents certain optimizations that depend on knowing the array bounds. -fcrossjumping Perform cross-jumping transformation. This transformation unifies equivalent code and saves code size. The resulting code may or may not perform better than without cross- jumping. Enabled at levels -O2, -O3, -Os. -fauto-inc-dec Combine increments or decrements of addresses with memory accesses. This pass is always skipped on architectures that do not have instructions to support this. Enabled by default at -O and higher on architectures that support this. -fdce Perform dead code elimination (DCE) on RTL. Enabled by default at -O and higher. -fdse Perform dead store elimination (DSE) on RTL. Enabled by default at -O and higher. -fif-conversion Attempt to transform conditional jumps into branch-less equivalents. This includes use of conditional moves, min, max, set flags and abs instructions, and some tricks doable by standard arithmetics. The use of conditional execution on chips where it is available is controlled by -fif-conversion2. Enabled at levels -O, -O2, -O3, -Os, but not with -Og. -fif-conversion2 Use conditional execution (where available) to transform conditional jumps into branch-less equivalents. Enabled at levels -O, -O2, -O3, -Os, but not with -Og. -fdeclone-ctor-dtor The C++ ABI requires multiple entry points for constructors and destructors: one for a base subobject, one for a complete object, and one for a virtual destructor that calls operator delete afterwards. For a hierarchy with virtual bases, the base and complete variants are clones, which means two copies of the function. With this option, the base and complete variants are changed to be thunks that call a common implementation. Enabled by -Os. -fdelete-null-pointer-checks Assume that programs cannot safely dereference null pointers, and that no code or data element resides at address zero. This option enables simple constant folding optimizations at all optimization levels. In addition, other optimization passes in GCC use this flag to control global dataflow analyses that eliminate useless checks for null pointers; these assume that a memory access to address zero always results in a trap, so that if a pointer is checked after it has already been dereferenced, it cannot be null. Note however that in some environments this assumption is not true. Use -fno-delete-null-pointer-checks to disable this optimization for programs that depend on that behavior. This option is enabled by default on most targets. On Nios II ELF, it defaults to off. On AVR, CR16, and MSP430, this option is completely disabled. Passes that use the dataflow information are enabled independently at different optimization levels. -fdevirtualize Attempt to convert calls to virtual functions to direct calls. This is done both within a procedure and interprocedurally as part of indirect inlining (-findirect-inlining) and interprocedural constant propagation (-fipa-cp). Enabled at levels -O2, -O3, -Os. -fdevirtualize-speculatively Attempt to convert calls to virtual functions to speculative direct calls. Based on the analysis of the type inheritance graph, determine for a given call the set of likely targets. If the set is small, preferably of size 1, change the call into a conditional deciding between direct and indirect calls. The speculative calls enable more optimizations, such as inlining. When they seem useless after further optimization, they are converted back into original form. 
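As a rough sketch of the -fdelete-null-pointer-checks assumption described above (the function below is hypothetical and only meant as an illustration), a null test that follows an unconditional dereference of the same pointer can be treated as dead on targets where accessing address zero traps:

        /* Sketch only: p is dereferenced before it is tested, so with
           -fdelete-null-pointer-checks in effect GCC may conclude that p
           cannot be null at the comparison and fold the branch away. */
        int first_value (int *p)
        {
          int v = *p;      /* dereference happens unconditionally     */
          if (p == 0)      /* therefore this check is considered dead */
            return -1;
          return v;
        }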
-fdevirtualize-at-ltrans
    Stream extra information needed for aggressive devirtualization when running the link-time optimizer in local transformation mode. This option enables more devirtualization but significantly increases the size of streamed data. For this reason it is disabled by default.

-fexpensive-optimizations
    Perform a number of minor optimizations that are relatively expensive. Enabled at levels -O2, -O3, -Os.

-free
    Attempt to remove redundant extension instructions. This is especially helpful for the x86-64 architecture, which implicitly zero-extends in 64-bit registers after writing to their lower 32-bit half. Enabled for Alpha, AArch64 and x86 at levels -O2, -O3, -Os.

-fno-lifetime-dse
    In C++ the value of an object is only affected by changes within its lifetime: when the constructor begins, the object has an indeterminate value, and any changes during the lifetime of the object are dead when the object is destroyed. Normally dead store elimination will take advantage of this; if your code relies on the value of the object storage persisting beyond the lifetime of the object, you can use this flag to disable this optimization. To preserve stores before the constructor starts (e.g. because your operator new clears the object storage) but still treat the object as dead after the destructor, you can use -flifetime-dse=1. The default behavior can be explicitly selected with -flifetime-dse=2. -flifetime-dse=0 is equivalent to -fno-lifetime-dse.

-flive-range-shrinkage
    Attempt to decrease register pressure through register live range shrinkage. This is helpful for fast processors with small or moderate-size register sets.

-fira-algorithm=algorithm
    Use the specified coloring algorithm for the integrated register allocator. The algorithm argument can be priority, which specifies Chow's priority coloring, or CB, which specifies Chaitin-Briggs coloring. Chaitin-Briggs coloring is not implemented for all architectures, but for those targets that do support it, it is the default because it generates better code.

-fira-region=region
    Use specified regions for the integrated register allocator. The region argument should be one of the following:

    all    Use all loops as register allocation regions. This can give the best results for machines with a small and/or irregular register set.

    mixed  Use all loops except for loops with small register pressure as the regions. This value usually gives the best results in most cases and for most architectures, and is enabled by default when compiling with optimization for speed (-O, -O2, ...).

    one    Use all functions as a single region. This typically results in the smallest code size, and is enabled by default for -Os or -O0.

-fira-hoist-pressure
    Use IRA to evaluate register pressure in the code hoisting pass for decisions to hoist expressions. This option usually results in smaller code, but it can slow the compiler down. This option is enabled at level -Os for all targets.

-fira-loop-pressure
    Use IRA to evaluate register pressure in loops for decisions to move loop invariants. This option usually results in generation of faster and smaller code on machines with large register files (>= 32 registers), but it can slow the compiler down. This option is enabled at level -O3 for some targets.

-fno-ira-share-save-slots
    Disable sharing of stack slots used for saving call-used hard registers living through a call. Each hard register gets a separate stack slot, and as a result function stack frames are larger.
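Returning to the -free option described above, the kind of redundancy it targets on x86-64 can be sketched as follows (the function is hypothetical; whether the extension is actually removed depends on the target and the surrounding code):

        /* Sketch only: on x86-64 the 32-bit load of *p already zero-extends
           the value into the full 64-bit register, so a separate extension
           instruction for the conversion below is redundant and is a
           candidate for removal by -free. */
        unsigned long widen (const unsigned int *p)
        {
          unsigned int x = *p;        /* implicit zero-extension on load */
          return (unsigned long) x;   /* ideally no extra instruction    */
        }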
-fno-ira-share-spill-slots Disable sharing of stack slots allocated for pseudo- registers. Each pseudo-register that does not get a hard register gets a separate stack slot, and as a result function stack frames are larger. -flra-remat Enable CFG-sensitive rematerialization in LRA. Instead of loading values of spilled pseudos, LRA tries to rematerialize (recalculate) values if it is profitable. Enabled at levels -O2, -O3, -Os. -fdelayed-branch If supported for the target machine, attempt to reorder instructions to exploit instruction slots available after delayed branch instructions. Enabled at levels -O, -O2, -O3, -Os, but not at -Og. -fschedule-insns If supported for the target machine, attempt to reorder instructions to eliminate execution stalls due to required data being unavailable. This helps machines that have slow floating point or memory load instructions by allowing other instructions to be issued until the result of the load or floating-point instruction is required. Enabled at levels -O2, -O3. -fschedule-insns2 Similar to -fschedule-insns, but requests an additional pass of instruction scheduling after register allocation has been done. This is especially useful on machines with a relatively small number of registers and where memory load instructions take more than one cycle. Enabled at levels -O2, -O3, -Os. -fno-sched-interblock Disable instruction scheduling across basic blocks, which is normally enabled when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher. -fno-sched-spec Disable speculative motion of non-load instructions, which is normally enabled when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher. -fsched-pressure Enable register pressure sensitive insn scheduling before register allocation. This only makes sense when scheduling before register allocation is enabled, i.e. with -fschedule-insns or at -O2 or higher. Usage of this option can improve the generated code and decrease its size by preventing register pressure increase above the number of available hard registers and subsequent spills in register allocation. -fsched-spec-load Allow speculative motion of some load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher. -fsched-spec-load-dangerous Allow speculative motion of more load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher. -fsched-stalled-insns -fsched-stalled-insns=n Define how many insns (if any) can be moved prematurely from the queue of stalled insns into the ready list during the second scheduling pass. -fno-sched-stalled-insns means that no insns are moved prematurely, -fsched-stalled-insns=0 means there is no limit on how many queued insns can be moved prematurely. -fsched-stalled-insns without a value is equivalent to -fsched-stalled-insns=1. -fsched-stalled-insns-dep -fsched-stalled-insns-dep=n Define how many insn groups (cycles) are examined for a dependency on a stalled insn that is a candidate for premature removal from the queue of stalled insns. This has an effect only during the second scheduling pass, and only if -fsched-stalled-insns is used. -fno-sched-stalled-insns-dep is equivalent to -fsched-stalled-insns-dep=0. -fsched-stalled-insns-dep without a value is equivalent to -fsched-stalled-insns-dep=1. -fsched2-use-superblocks When scheduling after register allocation, use superblock scheduling. 
This allows motion across basic block boundaries, resulting in faster schedules. This option is experimental, as not all machine descriptions used by GCC model the CPU closely enough to avoid unreliable results from the algorithm. This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher. -fsched-group-heuristic Enable the group heuristic in the scheduler. This heuristic favors the instruction that belongs to a schedule group. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-critical-path-heuristic Enable the critical-path heuristic in the scheduler. This heuristic favors instructions on the critical path. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-spec-insn-heuristic Enable the speculative instruction heuristic in the scheduler. This heuristic favors speculative instructions with greater dependency weakness. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-rank-heuristic Enable the rank heuristic in the scheduler. This heuristic favors the instruction belonging to a basic block with greater size or frequency. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-last-insn-heuristic Enable the last-instruction heuristic in the scheduler. This heuristic favors the instruction that is less dependent on the last instruction scheduled. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -fsched-dep-count-heuristic Enable the dependent-count heuristic in the scheduler. This heuristic favors the instruction that has more instructions depending on it. This is enabled by default when scheduling is enabled, i.e. with -fschedule-insns or -fschedule-insns2 or at -O2 or higher. -freschedule-modulo-scheduled-loops Modulo scheduling is performed before traditional scheduling. If a loop is modulo scheduled, later scheduling passes may change its schedule. Use this option to control that behavior. -fselective-scheduling Schedule instructions using selective scheduling algorithm. Selective scheduling runs instead of the first scheduler pass. -fselective-scheduling2 Schedule instructions using selective scheduling algorithm. Selective scheduling runs instead of the second scheduler pass. -fsel-sched-pipelining Enable software pipelining of innermost loops during selective scheduling. This option has no effect unless one of -fselective-scheduling or -fselective-scheduling2 is turned on. -fsel-sched-pipelining-outer-loops When pipelining loops during selective scheduling, also pipeline outer loops. This option has no effect unless -fsel-sched-pipelining is turned on. -fsemantic-interposition Some object formats, like ELF, allow interposing of symbols by the dynamic linker. This means that for symbols exported from the DSO, the compiler cannot perform interprocedural propagation, inlining and other optimizations in anticipation that the function or variable in question may change. While this feature is useful, for example, to rewrite memory allocation functions by a debugging implementation, it is expensive in the terms of code quality. 
With -fno-semantic-interposition the compiler assumes that if interposition happens for functions the overwriting function will have precisely the same semantics (and side effects). Similarly if interposition happens for variables, the constructor of the variable will be the same. The flag has no effect for functions explicitly declared inline (where it is never allowed for interposition to change semantics) and for symbols explicitly declared weak. -fshrink-wrap Emit function prologues only before parts of the function that need it, rather than at the top of the function. This flag is enabled by default at -O and higher. -fshrink-wrap-separate Shrink-wrap separate parts of the prologue and epilogue separately, so that those parts are only executed when needed. This option is on by default, but has no effect unless -fshrink-wrap is also turned on and the target supports this. -fcaller-saves Enable allocation of values to registers that are clobbered by function calls, by emitting extra instructions to save and restore the registers around such calls. Such allocation is done only when it seems to result in better code. This option is always enabled by default on certain machines, usually those which have no call-preserved registers to use instead. Enabled at levels -O2, -O3, -Os. -fcombine-stack-adjustments Tracks stack adjustments (pushes and pops) and stack memory references and then tries to find ways to combine them. Enabled by default at -O1 and higher. -fipa-ra Use caller save registers for allocation if those registers are not used by any called function. In that case it is not necessary to save and restore them around calls. This is only possible if called functions are part of same compilation unit as current function and they are compiled before it. Enabled at levels -O2, -O3, -Os, however the option is disabled if generated code will be instrumented for profiling (-p, or -pg) or if callee's register usage cannot be known exactly (this happens on targets that do not expose prologues and epilogues in RTL). -fconserve-stack Attempt to minimize stack usage. The compiler attempts to use less stack space, even if that makes the program slower. This option implies setting the large-stack-frame parameter to 100 and the large-stack-frame-growth parameter to 400. -ftree-reassoc Perform reassociation on trees. This flag is enabled by default at -O and higher. -fcode-hoisting Perform code hoisting. Code hoisting tries to move the evaluation of expressions executed on all paths to the function exit as early as possible. This is especially useful as a code size optimization, but it often helps for code speed as well. This flag is enabled by default at -O2 and higher. -ftree-pre Perform partial redundancy elimination (PRE) on trees. This flag is enabled by default at -O2 and -O3. -ftree-partial-pre Make partial redundancy elimination (PRE) more aggressive. This flag is enabled by default at -O3. -ftree-forwprop Perform forward propagation on trees. This flag is enabled by default at -O and higher. -ftree-fre Perform full redundancy elimination (FRE) on trees. The difference between FRE and PRE is that FRE only considers expressions that are computed on all paths leading to the redundant computation. This analysis is faster than PRE, though it exposes fewer redundancies. This flag is enabled by default at -O and higher. -ftree-phiprop Perform hoisting of loads from conditional pointers on trees. This pass is enabled by default at -O and higher. 
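As a rough sketch of the -fshrink-wrap behavior described above (do_work and the surrounding function are hypothetical), a function whose common path returns immediately does not need to run the prologue on that path:

        /* Sketch only: when x is zero no callee-saved registers are needed,
           so with -fshrink-wrap the prologue/epilogue can be emitted only
           around the path that actually calls do_work. */
        extern int do_work (int x);

        int maybe_work (int x)
        {
          if (x == 0)
            return 0;            /* fast path: no prologue required here */
          return do_work (x);    /* register saves only on this path     */
        }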
-fhoist-adjacent-loads
    Speculatively hoist loads from both branches of an if-then-else if the loads are from adjacent locations in the same structure and the target architecture has a conditional move instruction. This flag is enabled by default at -O2 and higher.

-ftree-copy-prop
    Perform copy propagation on trees. This pass eliminates unnecessary copy operations. This flag is enabled by default at -O and higher.

-fipa-pure-const
    Discover which functions are pure or constant. Enabled by default at -O and higher.

-fipa-reference
    Discover which static variables do not escape the compilation unit. Enabled by default at -O and higher.

-fipa-reference-addressable
    Discover read-only, write-only and non-addressable static variables. Enabled by default at -O and higher.

-fipa-stack-alignment
    Reduce stack alignment on call sites if possible. Enabled by default.

-fipa-pta
    Perform interprocedural pointer analysis and interprocedural modification and reference analysis. This option can cause excessive memory and compile-time usage on large compilation units. It is not enabled by default at any optimization level.

-fipa-profile
    Perform interprocedural profile propagation. The functions called only from cold functions are marked as cold. Also functions executed once (such as "cold", "noreturn", static constructors or destructors) are identified. Cold functions and loopless parts of functions executed once are then optimized for size. Enabled by default at -O and higher.

-fipa-cp
    Perform interprocedural constant propagation. This optimization analyzes the program to determine when values passed to functions are constants and then optimizes accordingly. This optimization can substantially increase performance if the application has constants passed to functions. This flag is enabled by default at -O2, -Os and -O3. It is also enabled by -fprofile-use and -fauto-profile.

-fipa-cp-clone
    Perform function cloning to make interprocedural constant propagation stronger. When enabled, interprocedural constant propagation performs function cloning when an externally visible function can be called with constant arguments. Because this optimization can create multiple copies of functions, it may significantly increase code size (see --param ipcp-unit-growth=value). This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-fipa-bit-cp
    When enabled, perform interprocedural bitwise constant propagation. This flag is enabled by default at -O2 and by -fprofile-use and -fauto-profile. It requires that -fipa-cp is enabled.

-fipa-vrp
    When enabled, perform interprocedural propagation of value ranges. This flag is enabled by default at -O2. It requires that -fipa-cp is enabled.

-fipa-icf
    Perform Identical Code Folding for functions and read-only variables. The optimization reduces code size and may disturb unwind stacks by replacing a function by an equivalent one with a different name. The optimization works more effectively with link-time optimization enabled. Although the behavior is similar to the Gold Linker's ICF optimization, GCC ICF works on different levels and thus the optimizations are not the same - there are equivalences that are found only by GCC and equivalences found only by Gold. This flag is enabled by default at -O2 and -Os.

-flive-patching=level
    Control GCC's optimizations to produce output suitable for live-patching.
If the compiler's optimization uses a function's body or information extracted from its body to optimize/change another function, the latter is called an impacted function of the former. If a function is patched, its impacted functions should be patched too. The impacted functions are determined by the compiler's interprocedural optimizations. For example, a caller is impacted when inlining a function into its caller, cloning a function and changing its caller to call this new clone, or extracting a function's pureness/constness information to optimize its direct or indirect callers, etc. Usually, the more IPA optimizations enabled, the larger the number of impacted functions for each function. In order to control the number of impacted functions and more easily compute the list of impacted function, IPA optimizations can be partially enabled at two different levels. The level argument should be one of the following: inline-clone Only enable inlining and cloning optimizations, which includes inlining, cloning, interprocedural scalar replacement of aggregates and partial inlining. As a result, when patching a function, all its callers and its clones' callers are impacted, therefore need to be patched as well. -flive-patching=inline-clone disables the following optimization flags: -fwhole-program -fipa-pta -fipa-reference -fipa-ra -fipa-icf -fipa-icf-functions -fipa-icf-variables -fipa-bit-cp -fipa-vrp -fipa-pure-const -fipa-reference-addressable -fipa-stack-alignment inline-only-static Only enable inlining of static functions. As a result, when patching a static function, all its callers are impacted and so need to be patched as well. In addition to all the flags that -flive-patching=inline-clone disables, -flive-patching=inline-only-static disables the following additional optimization flags: -fipa-cp-clone -fipa-sra -fpartial-inlining -fipa-cp When -flive-patching is specified without any value, the default value is inline-clone. This flag is disabled by default. Note that -flive-patching is not supported with link-time optimization (-flto). -fisolate-erroneous-paths-dereference Detect paths that trigger erroneous or undefined behavior due to dereferencing a null pointer. Isolate those paths from the main control flow and turn the statement with erroneous or undefined behavior into a trap. This flag is enabled by default at -O2 and higher and depends on -fdelete-null-pointer-checks also being enabled. -fisolate-erroneous-paths-attribute Detect paths that trigger erroneous or undefined behavior due to a null value being used in a way forbidden by a "returns_nonnull" or "nonnull" attribute. Isolate those paths from the main control flow and turn the statement with erroneous or undefined behavior into a trap. This is not currently enabled, but may be enabled by -O2 in the future. -ftree-sink Perform forward store motion on trees. This flag is enabled by default at -O and higher. -ftree-bit-ccp Perform sparse conditional bit constant propagation on trees and propagate pointer alignment information. This pass only operates on local scalar variables and is enabled by default at -O1 and higher, except for -Og. It requires that -ftree-ccp is enabled. -ftree-ccp Perform sparse conditional constant propagation (CCP) on trees. This pass only operates on local scalar variables and is enabled by default at -O and higher. -fssa-backprop Propagate information about uses of a value up the definition chain in order to simplify the definitions. 
For example, this pass strips sign operations if the sign of a value never matters. The flag is enabled by default at -O and higher. -fssa-phiopt Perform pattern matching on SSA PHI nodes to optimize conditional code. This pass is enabled by default at -O1 and higher, except for -Og. -ftree-switch-conversion Perform conversion of simple initializations in a switch to initializations from a scalar array. This flag is enabled by default at -O2 and higher. -ftree-tail-merge Look for identical code sequences. When found, replace one with a jump to the other. This optimization is known as tail merging or cross jumping. This flag is enabled by default at -O2 and higher. The compilation time in this pass can be limited using max-tail-merge-comparisons parameter and max- tail-merge-iterations parameter. -ftree-dce Perform dead code elimination (DCE) on trees. This flag is enabled by default at -O and higher. -ftree-builtin-call-dce Perform conditional dead code elimination (DCE) for calls to built-in functions that may set "errno" but are otherwise free of side effects. This flag is enabled by default at -O2 and higher if -Os is not also specified. -ftree-dominator-opts Perform a variety of simple scalar cleanups (constant/copy propagation, redundancy elimination, range propagation and expression simplification) based on a dominator tree traversal. This also performs jump threading (to reduce jumps to jumps). This flag is enabled by default at -O and higher. -ftree-dse Perform dead store elimination (DSE) on trees. A dead store is a store into a memory location that is later overwritten by another store without any intervening loads. In this case the earlier store can be deleted. This flag is enabled by default at -O and higher. -ftree-ch Perform loop header copying on trees. This is beneficial since it increases effectiveness of code motion optimizations. It also saves one jump. This flag is enabled by default at -O and higher. It is not enabled for -Os, since it usually increases code size. -ftree-loop-optimize Perform loop optimizations on trees. This flag is enabled by default at -O and higher. -ftree-loop-linear -floop-strip-mine -floop-block Perform loop nest optimizations. Same as -floop-nest-optimize. To use this code transformation, GCC has to be configured with --with-isl to enable the Graphite loop transformation infrastructure. -fgraphite-identity Enable the identity transformation for graphite. For every SCoP we generate the polyhedral representation and transform it back to gimple. Using -fgraphite-identity we can check the costs or benefits of the GIMPLE -> GRAPHITE -> GIMPLE transformation. Some minimal optimizations are also performed by the code generator isl, like index splitting and dead code elimination in loops. -floop-nest-optimize Enable the isl based loop nest optimizer. This is a generic loop nest optimizer based on the Pluto optimization algorithms. It calculates a loop structure optimized for data-locality and parallelism. This option is experimental. -floop-parallelize-all Use the Graphite data dependence analysis to identify loops that can be parallelized. Parallelize all the loops that can be analyzed to not contain loop carried dependences without checking that it is profitable to parallelize the loops. -ftree-coalesce-vars While transforming the program out of the SSA representation, attempt to reduce copying by coalescing versions of different user-defined variables, instead of just compiler temporaries. 
This may severely limit the ability to debug an optimized program compiled with -fno-var-tracking-assignments. In the negated form, this flag prevents SSA coalescing of user variables. This option is enabled by default if optimization is enabled, and it does very little otherwise.

-ftree-loop-if-convert
    Attempt to transform conditional jumps in the innermost loops to branch-less equivalents. The intent is to remove control flow from the innermost loops in order to improve the ability of the vectorization pass to handle these loops. This is enabled by default if vectorization is enabled.

-ftree-loop-distribution
    Perform loop distribution. This flag can improve cache performance on big loop bodies and allow further loop optimizations, like parallelization or vectorization, to take place. For example, the loop

        DO I = 1, N
          A(I) = B(I) + C
          D(I) = E(I) * F
        ENDDO

    is transformed to

        DO I = 1, N
          A(I) = B(I) + C
        ENDDO
        DO I = 1, N
          D(I) = E(I) * F
        ENDDO

    This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-ftree-loop-distribute-patterns
    Perform loop distribution of patterns that can be code generated with calls to a library. This flag is enabled by default at -O3, and by -fprofile-use and -fauto-profile. This pass distributes the initialization loops and generates a call to memset zero. For example, the loop

        DO I = 1, N
          A(I) = 0
          B(I) = A(I) + I
        ENDDO

    is transformed to

        DO I = 1, N
          A(I) = 0
        ENDDO
        DO I = 1, N
          B(I) = A(I) + I
        ENDDO

    and the initialization loop is transformed into a call to memset zero.

-floop-interchange
    Perform loop interchange outside of graphite. This flag can improve cache performance on loop nests and allow further loop optimizations, like vectorization, to take place. For example, the loop

        for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
              c[i][j] = c[i][j] + a[i][k]*b[k][j];

    is transformed to

        for (int i = 0; i < N; i++)
          for (int k = 0; k < N; k++)
            for (int j = 0; j < N; j++)
              c[i][j] = c[i][j] + a[i][k]*b[k][j];

    This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-floop-unroll-and-jam
    Apply unroll and jam transformations on feasible loops. In a loop nest this unrolls the outer loop by some factor and fuses the resulting multiple inner loops. This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile.

-ftree-loop-im
    Perform loop invariant motion on trees. This pass moves only invariants that are hard to handle at RTL level (function calls, operations that expand to nontrivial sequences of insns). With -funswitch-loops it also moves operands of conditions that are invariant out of the loop, so that we can use just trivial invariantness analysis in loop unswitching. The pass also includes store motion.

-ftree-loop-ivcanon
    Create a canonical counter for number of iterations in loops for which determining the number of iterations requires complicated analysis. Later optimizations then may determine the number easily. Useful especially in connection with unrolling.

-ftree-scev-cprop
    Perform final value replacement. If a variable is modified in a loop in such a way that its value when exiting the loop can be determined using only its initial value and the number of loop iterations, replace uses of the final value by such a computation, provided it is sufficiently cheap. This reduces data dependencies and may allow further simplifications.
Enabled by default at -O and higher. -fivopts Perform induction variable optimizations (strength reduction, induction variable merging and induction variable elimination) on trees. -ftree-parallelize-loops=n Parallelize loops, i.e., split their iteration space to run in n threads. This is only possible for loops whose iterations are independent and can be arbitrarily reordered. The optimization is only profitable on multiprocessor machines, for loops that are CPU-intensive, rather than constrained e.g. by memory bandwidth. This option implies -pthread, and thus is only supported on targets that have support for -pthread. -ftree-pta Perform function-local points-to analysis on trees. This flag is enabled by default at -O1 and higher, except for -Og. -ftree-sra Perform scalar replacement of aggregates. This pass replaces structure references with scalars to prevent committing structures to memory too early. This flag is enabled by default at -O1 and higher, except for -Og. -fstore-merging Perform merging of narrow stores to consecutive memory addresses. This pass merges contiguous stores of immediate values narrower than a word into fewer wider stores to reduce the number of instructions. This is enabled by default at -O2 and higher as well as -Os. -ftree-ter Perform temporary expression replacement during the SSA->normal phase. Single use/single def temporaries are replaced at their use location with their defining expression. This results in non-GIMPLE code, but gives the expanders much more complex trees to work on resulting in better RTL generation. This is enabled by default at -O and higher. -ftree-slsr Perform straight-line strength reduction on trees. This recognizes related expressions involving multiplications and replaces them by less expensive calculations when possible. This is enabled by default at -O and higher. -ftree-vectorize Perform vectorization on trees. This flag enables -ftree-loop-vectorize and -ftree-slp-vectorize if not explicitly specified. -ftree-loop-vectorize Perform loop vectorization on trees. This flag is enabled by default at -O3 and by -ftree-vectorize, -fprofile-use, and -fauto-profile. -ftree-slp-vectorize Perform basic block vectorization on trees. This flag is enabled by default at -O3 and by -ftree-vectorize, -fprofile-use, and -fauto-profile. -fvect-cost-model=model Alter the cost model used for vectorization. The model argument should be one of unlimited, dynamic or cheap. With the unlimited model the vectorized code-path is assumed to be profitable while with the dynamic model a runtime check guards the vectorized code-path to enable it only for iteration counts that will likely execute faster than when executing the original scalar loop. The cheap model disables vectorization of loops where doing so would be cost prohibitive for example due to required runtime checks for data dependence or alignment but otherwise is equal to the dynamic model. The default cost model depends on other optimization flags and is either dynamic or cheap. -fsimd-cost-model=model Alter the cost model used for vectorization of loops marked with the OpenMP simd directive. The model argument should be one of unlimited, dynamic, cheap. All values of model have the same meaning as described in -fvect-cost-model and by default a cost model defined with -fvect-cost-model is used. -ftree-vrp Perform Value Range Propagation on trees. This is similar to the constant propagation pass, but instead of values, ranges of values are propagated. 
This allows the optimizers to remove unnecessary range checks like array bound checks and null pointer checks. This is enabled by default at -O2 and higher. Null pointer check elimination is only done if -fdelete-null-pointer-checks is enabled.

-fsplit-paths
    Split paths leading to loop backedges. This can improve dead code elimination and common subexpression elimination. This is enabled by default at -O3 and above.

-fsplit-ivs-in-unroller
    Enables expression of values of induction variables in later iterations of the unrolled loop using the value in the first iteration. This breaks long dependency chains, thus improving efficiency of the scheduling passes. A combination of -fweb and CSE is often sufficient to obtain the same effect. However, that is not reliable in cases where the loop body is more complicated than a single basic block. It also does not work at all on some architectures due to restrictions in the CSE pass. This optimization is enabled by default.

-fvariable-expansion-in-unroller
    With this option, the compiler creates multiple copies of some local variables when unrolling a loop, which can result in superior code.

-fpartial-inlining
    Inline parts of functions. This option has an effect only when inlining itself is turned on by the -finline-functions or -finline-small-functions options. Enabled at levels -O2, -O3, -Os.

-fpredictive-commoning
    Perform predictive commoning optimization, i.e., reusing computations (especially memory loads and stores) performed in previous iterations of loops. This option is enabled at level -O3. It is also enabled by -fprofile-use and -fauto-profile.

-fprefetch-loop-arrays
    If supported by the target machine, generate instructions to prefetch memory to improve the performance of loops that access large arrays. This option may generate better or worse code; results are highly dependent on the structure of loops within the source code. Disabled at level -Os.

-fno-printf-return-value
    Do not substitute constants for the known return value of formatted output functions such as "sprintf", "snprintf", "vsprintf", and "vsnprintf" (but not "printf" or "fprintf"). This transformation allows GCC to optimize or even eliminate branches based on the known return value of these functions called with arguments that are either constant, or whose values are known to be in a range that makes determining the exact return value possible. For example, when -fprintf-return-value is in effect, both the branch and the body of the "if" statement (but not the call to "snprintf") can be optimized away when "i" is a 32-bit or smaller integer because the return value is guaranteed to be at most 8.

        char buf[9];
        if (snprintf (buf, sizeof buf, "%08x", i) >= sizeof buf)
          ...

    The -fprintf-return-value option relies on other optimizations and yields best results with -O2 and above. It works in tandem with the -Wformat-overflow and -Wformat-truncation options. The -fprintf-return-value option is enabled by default.

-fno-peephole
-fno-peephole2
    Disable any machine-specific peephole optimizations. The difference between -fno-peephole and -fno-peephole2 is in how they are implemented in the compiler; some targets use one, some use the other, a few use both. -fpeephole is enabled by default. -fpeephole2 is enabled at levels -O2, -O3, -Os.

-fno-guess-branch-probability
    Do not guess branch probabilities using heuristics. GCC uses heuristics to guess branch probabilities if they are not provided by profiling feedback (-fprofile-arcs). These heuristics are based on the control flow graph.
    If some branch probabilities are specified by "__builtin_expect", then the heuristics are used to guess branch probabilities for the rest of the control flow graph, taking the "__builtin_expect" info into account. The interactions between the heuristics and "__builtin_expect" can be complex, and in some cases, it may be useful to disable the heuristics so that the effects of "__builtin_expect" are easier to understand. It is also possible to specify the expected probability of an expression with the "__builtin_expect_with_probability" built-in function. The default is -fguess-branch-probability at levels -O, -O2, -O3, -Os.

-freorder-blocks
    Reorder basic blocks in the compiled function in order to reduce the number of taken branches and improve code locality. Enabled at levels -O, -O2, -O3, -Os.

-freorder-blocks-algorithm=algorithm
    Use the specified algorithm for basic block reordering. The algorithm argument can be simple, which does not increase code size (except sometimes due to secondary effects like alignment), or stc, the "software trace cache" algorithm, which tries to put all often executed code together, minimizing the number of branches executed by making extra copies of code. The default is simple at levels -O, -Os, and stc at levels -O2, -O3.

-freorder-blocks-and-partition
    In addition to reordering basic blocks in the compiled function, in order to reduce the number of taken branches, partitions hot and cold basic blocks into separate sections of the assembly and .o files, to improve paging and cache locality performance. This optimization is automatically turned off in the presence of exception handling or unwind tables (on targets using setjump/longjump or a target-specific scheme), for linkonce sections, for functions with a user-defined section attribute and on any architecture that does not support named sections. When -fsplit-stack is used this option is not enabled by default (to avoid linker errors), but may be enabled explicitly (if using a working linker). Enabled for x86 at levels -O2, -O3, -Os.

-freorder-functions
    Reorder functions in the object file in order to improve code locality. This is implemented by using special subsections ".text.hot" for most frequently executed functions and ".text.unlikely" for unlikely executed functions. Reordering is done by the linker, so the object file format must support named sections and the linker must place them in a reasonable way. This option isn't effective unless you either provide profile feedback (see -fprofile-arcs for details) or manually annotate functions with "hot" or "cold" attributes. Enabled at levels -O2, -O3, -Os.

-fstrict-aliasing
    Allow the compiler to assume the strictest aliasing rules applicable to the language being compiled. For C (and C++), this activates optimizations based on the type of expressions. In particular, an object of one type is assumed never to reside at the same address as an object of a different type, unless the types are almost the same. For example, an "unsigned int" can alias an "int", but not a "void*" or a "double". A character type may alias any other type. Pay special attention to code like this:

        union a_union {
          int i;
          double d;
        };

        int f() {
          union a_union t;
          t.d = 3.0;
          return t.i;
        }

    The practice of reading from a different union member than the one most recently written to (called "type-punning") is common. Even with -fstrict-aliasing, type-punning is allowed, provided the memory is accessed through the union type. So, the code above works as expected.
    However, this code might not:

        int f() {
          union a_union t;
          int* ip;
          t.d = 3.0;
          ip = &t.i;
          return *ip;
        }

    Similarly, access by taking the address, casting the resulting pointer and dereferencing the result has undefined behavior, even if the cast uses a union type, e.g.:

        int f() {
          double d = 3.0;
          return ((union a_union *) &d)->i;
        }

    The -fstrict-aliasing option is enabled at levels -O2, -O3, -Os.

-falign-functions
-falign-functions=n
-falign-functions=n:m
-falign-functions=n:m:n2
-falign-functions=n:m:n2:m2
    Align the start of functions to the next power-of-two greater than n, skipping up to m-1 bytes. This ensures that at least the first m bytes of the function can be fetched by the CPU without crossing an n-byte alignment boundary. If m is not specified, it defaults to n.

    Examples: -falign-functions=32 aligns functions to the next 32-byte boundary, -falign-functions=24 aligns to the next 32-byte boundary only if this can be done by skipping 23 bytes or less, -falign-functions=32:7 aligns to the next 32-byte boundary only if this can be done by skipping 6 bytes or less.

    The second pair of n2:m2 values allows you to specify a secondary alignment: -falign-functions=64:7:32:3 aligns to the next 64-byte boundary if this can be done by skipping 6 bytes or less, otherwise aligns to the next 32-byte boundary if this can be done by skipping 2 bytes or less. If m2 is not specified, it defaults to n2.

    Some assemblers only support this flag when n is a power of two; in that case, it is rounded up. -fno-align-functions and -falign-functions=1 are equivalent and mean that functions are not aligned. If n is not specified or is zero, use a machine-dependent default. The maximum allowed n option value is 65536. Enabled at levels -O2, -O3.

-flimit-function-alignment
    If this option is enabled, the compiler tries to avoid unnecessarily overaligning functions. It attempts to instruct the assembler to align by the amount specified by -falign-functions, but not to skip more bytes than the size of the function.

-falign-labels
-falign-labels=n
-falign-labels=n:m
-falign-labels=n:m:n2
-falign-labels=n:m:n2:m2
    Align all branch targets to a power-of-two boundary. Parameters of this option are analogous to the -falign-functions option. -fno-align-labels and -falign-labels=1 are equivalent and mean that labels are not aligned. If -falign-loops or -falign-jumps are applicable and are greater than this value, then their values are used instead. If n is not specified or is zero, use a machine-dependent default which is very likely to be 1, meaning no alignment. The maximum allowed n option value is 65536. Enabled at levels -O2, -O3.

-falign-loops
-falign-loops=n
-falign-loops=n:m
-falign-loops=n:m:n2
-falign-loops=n:m:n2:m2
    Align loops to a power-of-two boundary. If the loops are executed many times, this makes up for any execution of the dummy padding instructions. Parameters of this option are analogous to the -falign-functions option. -fno-align-loops and -falign-loops=1 are equivalent and mean that loops are not aligned. The maximum allowed n option value is 65536. If n is not specified or is zero, use a machine-dependent default. Enabled at levels -O2, -O3.

-falign-jumps
-falign-jumps=n
-falign-jumps=n:m
-falign-jumps=n:m:n2
-falign-jumps=n:m:n2:m2
    Align branch targets to a power-of-two boundary, for branch targets where the targets can only be reached by jumping. In this case, no dummy operations need be executed. Parameters of this option are analogous to the -falign-functions option.
    -fno-align-jumps and -falign-jumps=1 are equivalent and mean that branch targets are not aligned. If n is not specified or is zero, use a machine-dependent default. The maximum allowed n option value is 65536. Enabled at levels -O2, -O3.

-funit-at-a-time
    This option is left for compatibility reasons. -funit-at-a-time has no effect, while -fno-unit-at-a-time implies -fno-toplevel-reorder and -fno-section-anchors. Enabled by default.

-fno-toplevel-reorder
    Do not reorder top-level functions, variables, and "asm" statements. Output them in the same order that they appear in the input file. When this option is used, unreferenced static variables are not removed. This option is intended to support existing code that relies on a particular ordering. For new code, it is better to use attributes when possible. -ftoplevel-reorder is the default at -O1 and higher, and also at -O0 if -fsection-anchors is explicitly requested. Additionally -fno-toplevel-reorder implies -fno-section-anchors.

-fweb
    Construct webs as commonly used for register allocation purposes and assign each web an individual pseudo register. This allows the register allocation pass to operate on pseudos directly, but also strengthens several other optimization passes, such as CSE, the loop optimizer and the trivial dead code remover. It can, however, make debugging impossible, since variables no longer stay in a "home register". Enabled by default with -funroll-loops.

-fwhole-program
    Assume that the current compilation unit represents the whole program being compiled. All public functions and variables with the exception of "main" and those merged by attribute "externally_visible" become static functions and in effect are optimized more aggressively by interprocedural optimizers. This option should not be used in combination with -flto. Instead, relying on a linker plugin should provide safer and more precise information.

-flto[=n]
    This option runs the standard link-time optimizer. When invoked with source code, it generates GIMPLE (one of GCC's internal representations) and writes it to special ELF sections in the object file. When the object files are linked together, all the function bodies are read from these ELF sections and instantiated as if they had been part of the same translation unit.

    To use the link-time optimizer, -flto and optimization options should be specified at compile time and during the final link. It is recommended that you compile all the files participating in the same link with the same options and also specify those options at link time. For example:

        gcc -c -O2 -flto foo.c
        gcc -c -O2 -flto bar.c
        gcc -o myprog -flto -O2 foo.o bar.o

    The first two invocations of GCC save a bytecode representation of GIMPLE into special ELF sections inside foo.o and bar.o. The final invocation reads the GIMPLE bytecode from foo.o and bar.o, merges the two files into a single internal image, and compiles the result as usual. Since both foo.o and bar.o are merged into a single image, this causes all the interprocedural analyses and optimizations in GCC to work across the two files as if they were a single one. This means, for example, that the inliner is able to inline functions in bar.o into functions in foo.o and vice-versa.

    Another (simpler) way to enable link-time optimization is:

        gcc -o myprog -flto -O2 foo.c bar.c

    The above generates bytecode for foo.c and bar.c, merges them together into a single GIMPLE representation and optimizes them as usual to produce myprog.
The important thing to keep in mind is that to enable link- time optimizations you need to use the GCC driver to perform the link step. GCC automatically performs link-time optimization if any of the objects involved were compiled with the -flto command-line option. You can always override the automatic decision to do link-time optimization by passing -fno-lto to the link command. To make whole program optimization effective, it is necessary to make certain whole program assumptions. The compiler needs to know what functions and variables can be accessed by libraries and runtime outside of the link-time optimized unit. When supported by the linker, the linker plugin (see -fuse-linker-plugin) passes information to the compiler about used and externally visible symbols. When the linker plugin is not available, -fwhole-program should be used to allow the compiler to make these assumptions, which leads to more aggressive optimization decisions. When a file is compiled with -flto without -fuse-linker-plugin, the generated object file is larger than a regular object file because it contains GIMPLE bytecodes and the usual final code (see -ffat-lto-objects. This means that object files with LTO information can be linked as normal object files; if -fno-lto is passed to the linker, no interprocedural optimizations are applied. Note that when -fno-fat-lto-objects is enabled the compile stage is faster but you cannot perform a regular, non-LTO link on them. When producing the final binary, GCC only applies link-time optimizations to those files that contain bytecode. Therefore, you can mix and match object files and libraries with GIMPLE bytecodes and final object code. GCC automatically selects which files to optimize in LTO mode and which files to link without further processing. Generally, options specified at link time override those specified at compile time, although in some cases GCC attempts to infer link-time options from the settings used to compile the input files. If you do not specify an optimization level option -O at link time, then GCC uses the highest optimization level used when compiling the object files. Note that it is generally ineffective to specify an optimization level option only at link time and not at compile time, for two reasons. First, compiling without optimization suppresses compiler passes that gather information needed for effective optimization at link time. Second, some early optimization passes can be performed only at compile time and not at link time. There are some code generation flags preserved by GCC when generating bytecodes, as they need to be used during the final link. Currently, the following options and their settings are taken from the first object file that explicitly specifies them: -fPIC, -fpic, -fpie, -fcommon, -fexceptions, -fnon-call-exceptions, -fgnu-tm and all the -m target flags. Certain ABI-changing flags are required to match in all compilation units, and trying to override this at link time with a conflicting value is ignored. This includes options such as -freg-struct-return and -fpcc-struct-return. Other options such as -ffp-contract, -fno-strict-overflow, -fwrapv, -fno-trapv or -fno-strict-aliasing are passed through to the link stage and merged conservatively for conflicting translation units. Specifically -fno-strict-overflow, -fwrapv and -fno-trapv take precedence; and for example -ffp-contract=off takes precedence over -ffp-contract=fast. You can override them at link time. 
When you need to pass options to the assembler via -Wa or -Xassembler make sure to either compile such translation units with -fno-lto or consistently use the same assembler options on all translation units. You can alternatively also specify assembler options at LTO link time. If LTO encounters objects with C linkage declared with incompatible types in separate translation units to be linked together (undefined behavior according to ISO C99 6.2.7), a non-fatal diagnostic may be issued. The behavior is still undefined at run time. Similar diagnostics may be raised for other languages. Another feature of LTO is that it is possible to apply interprocedural optimizations on files written in different languages: gcc -c -flto foo.c g++ -c -flto bar.cc gfortran -c -flto baz.f90 g++ -o myprog -flto -O3 foo.o bar.o baz.o -lgfortran Notice that the final link is done with g++ to get the C++ runtime libraries and -lgfortran is added to get the Fortran runtime libraries. In general, when mixing languages in LTO mode, you should use the same link command options as when mixing languages in a regular (non-LTO) compilation. If object files containing GIMPLE bytecode are stored in a library archive, say libfoo.a, it is possible to extract and use them in an LTO link if you are using a linker with plugin support. To create static libraries suitable for LTO, use gcc-ar and gcc-ranlib instead of ar and ranlib; to show the symbols of object files with GIMPLE bytecode, use gcc-nm. Those commands require that ar, ranlib and nm have been compiled with plugin support. At link time, use the flag -fuse-linker-plugin to ensure that the library participates in the LTO optimization process: gcc -o myprog -O2 -flto -fuse-linker-plugin a.o b.o -lfoo With the linker plugin enabled, the linker extracts the needed GIMPLE files from libfoo.a and passes them on to the running GCC to make them part of the aggregated GIMPLE image to be optimized. If you are not using a linker with plugin support and/or do not enable the linker plugin, then the objects inside libfoo.a are extracted and linked as usual, but they do not participate in the LTO optimization process. In order to make a static library suitable for both LTO optimization and usual linkage, compile its object files with -flto -ffat-lto-objects. Link-time optimizations do not require the presence of the whole program to operate. If the program does not require any symbols to be exported, it is possible to combine -flto and -fwhole-program to allow the interprocedural optimizers to use more aggressive assumptions which may lead to improved optimization opportunities. Use of -fwhole-program is not needed when linker plugin is active (see -fuse-linker-plugin). The current implementation of LTO makes no attempt to generate bytecode that is portable between different types of hosts. The bytecode files are versioned and there is a strict version check, so bytecode files generated in one version of GCC do not work with an older or newer version of GCC. Link-time optimization does not work well with generation of debugging information on systems other than those using a combination of ELF and DWARF. If you specify the optional n, the optimization and code generation done at link time is executed in parallel using n parallel jobs by utilizing an installed make program. The environment variable MAKE may be used to override the program used. The default value for n is 1. 
You can also specify -flto=jobserver to use GNU make's job server mode to determine the number of parallel jobs. This is useful when the Makefile calling GCC is already executing in parallel. You must prepend a + to the command recipe in the parent Makefile for this to work. This option likely only works if MAKE is GNU make.

-flto-partition=alg
    Specify the partitioning algorithm used by the link-time optimizer. The value is either 1to1 to specify a partitioning mirroring the original source files, or balanced to specify partitioning into equally sized chunks (whenever possible), or max to create a new partition for every symbol where possible. Specifying none as an algorithm disables partitioning and streaming completely. The default value is balanced. While 1to1 can be used as a workaround for various code ordering issues, the max partitioning is intended for internal testing only. The value one specifies that exactly one partition should be used, while the value none bypasses partitioning and executes the link-time optimization step directly from the WPA phase.

-flto-odr-type-merging
    Enable streaming of mangled type names of C++ types and their unification at link time. This increases the size of LTO object files, but enables diagnostics about One Definition Rule violations.

-flto-compression-level=n
    This option specifies the level of compression used for intermediate language written to LTO object files, and is only meaningful in conjunction with LTO mode (-flto). Valid values are 0 (no compression) to 9 (maximum compression). Values outside this range are clamped to either 0 or 9. If the option is not given, a default balanced compression setting is used.

-fuse-linker-plugin
    Enables the use of a linker plugin during link-time optimization. This option relies on plugin support in the linker, which is available in gold or in GNU ld 2.21 or newer. This option enables the extraction of object files with GIMPLE bytecode out of library archives. This improves the quality of optimization by exposing more code to the link-time optimizer. This information specifies what symbols can be accessed externally (by non-LTO object or during dynamic linking). Resulting code quality improvements on binaries (and shared libraries that use hidden visibility) are similar to -fwhole-program. See -flto for a description of the effect of this flag and how to use it.

    This option is enabled by default when LTO support in GCC is enabled and GCC was configured for use with a linker supporting plugins (GNU ld 2.21 or newer or gold).

-ffat-lto-objects
    Fat LTO objects are object files that contain both the intermediate language and the object code. This makes them usable for both LTO linking and normal linking. This option is effective only when compiling with -flto and is ignored at link time.

    -fno-fat-lto-objects improves compilation time over plain LTO, but requires the complete toolchain to be aware of LTO. It requires a linker with linker plugin support for basic functionality. Additionally, nm, ar and ranlib need to support linker plugins to allow a full-featured build environment (capable of building static libraries etc.). GCC provides the gcc-ar, gcc-nm, gcc-ranlib wrappers to pass the right options to these tools. With non-fat LTO, makefiles need to be modified to use them.

    Note that modern binutils provide a plugin auto-load mechanism. Installing the linker plugin into $libdir/bfd-plugins has the same effect as usage of the command wrappers (gcc-ar, gcc-nm and gcc-ranlib).
The default is -fno-fat-lto-objects on targets with linker plugin support. -fcompare-elim After register allocation and post-register allocation instruction splitting, identify arithmetic instructions that compute processor flags similar to a comparison operation based on that arithmetic. If possible, eliminate the explicit comparison operation. This pass only applies to certain targets that cannot explicitly represent the comparison operation before register allocation is complete. Enabled at levels -O, -O2, -O3, -Os. -fcprop-registers After register allocation and post-register allocation instruction splitting, perform a copy-propagation pass to try to reduce scheduling dependencies and occasionally eliminate the copy. Enabled at levels -O, -O2, -O3, -Os. -fprofile-correction Profiles collected using an instrumented binary for multi- threaded programs may be inconsistent due to missed counter updates. When this option is specified, GCC uses heuristics to correct or smooth out such inconsistencies. By default, GCC emits an error message when an inconsistent profile is detected. This option is enabled by -fauto-profile. -fprofile-use -fprofile-use=path Enable profile feedback-directed optimizations, and the following optimizations, many of which are generally profitable only with profile feedback available: -fbranch-probabilities -fprofile-values -funroll-loops -fpeel-loops -ftracer -fvpt -finline-functions -fipa-cp -fipa-cp-clone -fipa-bit-cp -fpredictive-commoning -fsplit-loops -funswitch-loops -fgcse-after-reload -ftree-loop-vectorize -ftree-slp-vectorize -fvect-cost-model=dynamic -ftree-loop-distribute-patterns -fprofile-reorder-functions Before you can use this option, you must first generate profiling information. By default, GCC emits an error message if the feedback profiles do not match the source code. This error can be turned into a warning by using -Wno-error=coverage-mismatch. Note this may result in poorly optimized code. Additionally, by default, GCC also emits a warning message if the feedback profiles do not exist (see -Wmissing-profile). If path is specified, GCC looks at the path to find the profile feedback data files. See -fprofile-dir. -fauto-profile -fauto-profile=path Enable sampling-based feedback-directed optimizations, and the following optimizations, many of which are generally profitable only with profile feedback available: -fbranch-probabilities -fprofile-values -funroll-loops -fpeel-loops -ftracer -fvpt -finline-functions -fipa-cp -fipa-cp-clone -fipa-bit-cp -fpredictive-commoning -fsplit-loops -funswitch-loops -fgcse-after-reload -ftree-loop-vectorize -ftree-slp-vectorize -fvect-cost-model=dynamic -ftree-loop-distribute-patterns -fprofile-correction path is the name of a file containing AutoFDO profile information. If omitted, it defaults to fbdata.afdo in the current directory. Producing an AutoFDO profile data file requires running your program with the perf utility on a supported GNU/Linux target system. For more information, see <https://perf.wiki.kernel.org/ >. E.g. perf record -e br_inst_retired:near_taken -b -o perf.data \ -- your_program Then use the create_gcov tool to convert the raw profile data to a format that can be used by GCC. You must also supply the unstripped binary for your program to this tool. See <https://github.com/google/autofdo >. E.g. create_gcov --binary=your_program.unstripped --profile=perf.data \ --gcov=profile.afdo The following options control compiler behavior regarding floating-point arithmetic. 
These options trade off between speed and correctness. All must be specifically enabled. -ffloat-store Do not store floating-point variables in registers, and inhibit other options that might change whether a floating- point value is taken from a register or memory. This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a "double" is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables. -fexcess-precision=style This option allows further control over excess precision on machines where floating-point operations occur in a format with more precision or range than the IEEE standard and interchange floating-point types. By default, -fexcess-precision=fast is in effect; this means that operations may be carried out in a wider precision than the types specified in the source if that would result in faster code, and it is unpredictable when rounding to the types specified in the source code takes place. When compiling C, if -fexcess-precision=standard is specified then excess precision follows the rules specified in ISO C99; in particular, both casts and assignments cause values to be rounded to their semantic types (whereas -ffloat-store only affects assignments). This option is enabled by default for C if a strict conformance option such as -std=c99 is used. -ffast-math enables -fexcess-precision=fast by default regardless of whether a strict conformance option is used. -fexcess-precision=standard is not implemented for languages other than C. On the x86, it has no effect if -mfpmath=sse or -mfpmath=sse+387 is specified; in the former case, IEEE semantics apply without excess precision, and in the latter, rounding is unpredictable. -ffast-math Sets the options -fno-math-errno, -funsafe-math-optimizations, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans, -fcx-limited-range and -fexcess-precision=fast. This option causes the preprocessor macro "__FAST_MATH__" to be defined. This option is not turned on by any -O option besides -Ofast since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. -fno-math-errno Do not set "errno" after calling math functions that are executed with a single instruction, e.g., "sqrt". A program that relies on IEEE exceptions for math error handling may want to use this flag for speed while maintaining IEEE arithmetic compatibility. This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. The default is -fmath-errno. On Darwin systems, the math library never sets "errno". There is therefore no reason for the compiler to consider the possibility that it might, and -fno-math-errno is the default. -funsafe-math-optimizations Allow optimizations for floating-point arithmetic that (a) assume that arguments and results are valid and (b) may violate IEEE or ANSI standards. 
When used at link time, it may include libraries or startup files that change the default FPU control word or other similar optimizations. This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. Enables -fno-signed-zeros, -fno-trapping-math, -fassociative-math and -freciprocal-math. The default is -fno-unsafe-math-optimizations. -fassociative-math Allow re-association of operands in series of floating-point operations. This violates the ISO C and C++ language standard by possibly changing computation result. NOTE: re- ordering may change the sign of zero as well as ignore NaNs and inhibit or create underflow or overflow (and thus cannot be used on code that relies on rounding behavior like "(x + 2**52) - 2**52". May also reorder floating-point comparisons and thus may not be used when ordered comparisons are required. This option requires that both -fno-signed-zeros and -fno-trapping-math be in effect. Moreover, it doesn't make much sense with -frounding-math. For Fortran the option is automatically enabled when both -fno-signed-zeros and -fno-trapping-math are in effect. The default is -fno-associative-math. -freciprocal-math Allow the reciprocal of a value to be used instead of dividing by the value if this enables optimizations. For example "x / y" can be replaced with "x * (1/y)", which is useful if "(1/y)" is subject to common subexpression elimination. Note that this loses precision and increases the number of flops operating on the value. The default is -fno-reciprocal-math. -ffinite-math-only Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs. This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. The default is -fno-finite-math-only. -fno-signed-zeros Allow optimizations for floating-point arithmetic that ignore the signedness of zero. IEEE arithmetic specifies the behavior of distinct +0.0 and -0.0 values, which then prohibits simplification of expressions such as x+0.0 or 0.0*x (even with -ffinite-math-only). This option implies that the sign of a zero result isn't significant. The default is -fsigned-zeros. -fno-trapping-math Compile code assuming that floating-point operations cannot generate user-visible traps. These traps include division by zero, overflow, underflow, inexact result and invalid operation. This option requires that -fno-signaling-nans be in effect. Setting this option may allow faster code if one relies on "non-stop" IEEE arithmetic, for example. This option should never be turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. The default is -ftrapping-math. -frounding-math Disable transformations and optimizations that assume default floating-point rounding behavior. This is round-to-zero for all floating point to integer conversions, and round-to- nearest for all other arithmetic truncations. 
This option should be specified for programs that change the FP rounding mode dynamically, or that may be executed with a non-default rounding mode. This option disables constant folding of floating-point expressions at compile time (which may be affected by rounding mode) and arithmetic transformations that are unsafe in the presence of sign-dependent rounding modes. The default is -fno-rounding-math. This option is experimental and does not currently guarantee to disable all GCC optimizations that are affected by rounding mode. Future versions of GCC may provide finer control of this setting using C99's "FENV_ACCESS" pragma. This command-line option will be used to specify the default state for "FENV_ACCESS". -fsignaling-nans Compile code assuming that IEEE signaling NaNs may generate user-visible traps during floating-point operations. Setting this option disables optimizations that may change the number of exceptions visible with signaling NaNs. This option implies -ftrapping-math. This option causes the preprocessor macro "__SUPPORT_SNAN__" to be defined. The default is -fno-signaling-nans. This option is experimental and does not currently guarantee to disable all GCC optimizations that affect signaling NaN behavior. -fno-fp-int-builtin-inexact Do not allow the built-in functions "ceil", "floor", "round" and "trunc", and their "float" and "long double" variants, to generate code that raises the "inexact" floating-point exception for noninteger arguments. ISO C99 and C11 allow these functions to raise the "inexact" exception, but ISO/IEC TS 18661-1:2014, the C bindings to IEEE 754-2008, does not allow these functions to do so. The default is -ffp-int-builtin-inexact, allowing the exception to be raised. This option does nothing unless -ftrapping-math is in effect. Even if -fno-fp-int-builtin-inexact is used, if the functions generate a call to a library function then the "inexact" exception may be raised if the library implementation does not follow TS 18661. -fsingle-precision-constant Treat floating-point constants as single precision instead of implicitly converting them to double-precision constants. -fcx-limited-range When enabled, this option states that a range reduction step is not needed when performing complex division. Also, there is no checking whether the result of a complex multiplication or division is "NaN + I*NaN", with an attempt to rescue the situation in that case. The default is -fno-cx-limited-range, but is enabled by -ffast-math. This option controls the default setting of the ISO C99 "CX_LIMITED_RANGE" pragma. Nevertheless, the option applies to all languages. -fcx-fortran-rules Complex multiplication and division follow Fortran rules. Range reduction is done as part of complex division, but there is no checking whether the result of a complex multiplication or division is "NaN + I*NaN", with an attempt to rescue the situation in that case. The default is -fno-cx-fortran-rules. The following options control optimizations that may improve performance, but are not enabled by any -O options. This section includes experimental options that may produce broken code. -fbranch-probabilities After running a program compiled with -fprofile-arcs, you can compile it a second time using -fbranch-probabilities, to improve optimizations based on the number of times each branch was taken. When a program compiled with -fprofile-arcs exits, it saves arc execution counts to a file called sourcename.gcda for each source file. 
The information in this data file is very dependent on the structure of the generated code, so you must use the same source code and the same optimization options for both compilations. With -fbranch-probabilities, GCC puts a REG_BR_PROB note on each JUMP_INSN and CALL_INSN. These can be used to improve optimization. Currently, they are only used in one place: in reorg.c, instead of guessing which path a branch is most likely to take, the REG_BR_PROB values are used to exactly determine which path is taken more often. Enabled by -fprofile-use and -fauto-profile. -fprofile-values If combined with -fprofile-arcs, it adds code so that some data about values of expressions in the program is gathered. With -fbranch-probabilities, it reads back the data gathered from profiling values of expressions for usage in optimizations. Enabled by -fprofile-generate, -fprofile-use, and -fauto-profile. -fprofile-reorder-functions Function reordering based on profile instrumentation collects first time of execution of a function and orders these functions in ascending order. Enabled with -fprofile-use. -fvpt If combined with -fprofile-arcs, this option instructs the compiler to add code to gather information about values of expressions. With -fbranch-probabilities, it reads back the data gathered and actually performs the optimizations based on them. Currently the optimizations include specialization of division operations using the knowledge about the value of the denominator. Enabled with -fprofile-use and -fauto-profile. -frename-registers Attempt to avoid false dependencies in scheduled code by making use of registers left over after register allocation. This optimization most benefits processors with lots of registers. Depending on the debug information format adopted by the target, however, it can make debugging impossible, since variables no longer stay in a "home register". Enabled by default with -funroll-loops. -fschedule-fusion Performs a target dependent pass over the instruction stream to schedule instructions of same type together because target machine can execute them more efficiently if they are adjacent to each other in the instruction flow. Enabled at levels -O2, -O3, -Os. -ftracer Perform tail duplication to enlarge superblock size. This transformation simplifies the control flow of the function allowing other optimizations to do a better job. Enabled by -fprofile-use and -fauto-profile. -funroll-loops Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop. -funroll-loops implies -frerun-cse-after-loop, -fweb and -frename-registers. It also turns on complete loop peeling (i.e. complete removal of loops with a small constant number of iterations). This option makes code larger, and may or may not make it run faster. Enabled by -fprofile-use and -fauto-profile. -funroll-all-loops Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This usually makes programs run more slowly. -funroll-all-loops implies the same options as -funroll-loops. -fpeel-loops Peels loops for which there is enough information that they do not roll much (from profile feedback or static analysis). It also turns on complete loop peeling (i.e. complete removal of loops with small constant number of iterations). Enabled by -O3, -fprofile-use, and -fauto-profile. -fmove-loop-invariants Enables the loop invariant motion pass in the RTL loop optimizer. Enabled at level -O1 and higher, except for -Og. 
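As a sketch of the two-step profile feedback cycle described above for -fbranch-probabilities (the source file and training run are only illustrative):

    gcc -O2 -fprofile-arcs foo.c -o foo
    ./foo
    gcc -O2 -fbranch-probabilities foo.c -o foo

The intermediate run writes foo.gcda, which the second compilation reads; as noted above, both compilations must use the same source code and the same optimization options.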
-fsplit-loops Split a loop into two if it contains a condition that's always true for one side of the iteration space and false for the other. Enabled by -fprofile-use and -fauto-profile. -funswitch-loops Move branches with loop invariant conditions out of the loop, with duplicates of the loop on both branches (modified according to result of the condition). Enabled by -fprofile-use and -fauto-profile. -fversion-loops-for-strides If a loop iterates over an array with a variable stride, create another version of the loop that assumes the stride is always one. For example: for (int i = 0; i < n; ++i) x[i * stride] = ...; becomes: if (stride == 1) for (int i = 0; i < n; ++i) x[i] = ...; else for (int i = 0; i < n; ++i) x[i * stride] = ...; This is particularly useful for assumed-shape arrays in Fortran where (for example) it allows better vectorization assuming contiguous accesses. This flag is enabled by default at -O3. It is also enabled by -fprofile-use and -fauto-profile. -ffunction-sections -fdata-sections Place each function or data item into its own section in the output file if the target supports arbitrary sections. The name of the function or the name of the data item determines the section's name in the output file. Use these options on systems where the linker can perform optimizations to improve locality of reference in the instruction space. Most systems using the ELF object format have linkers with such optimizations. On AIX, the linker rearranges sections (CSECTs) based on the call graph. The performance impact varies. Together with a linker garbage collection (linker --gc-sections option) these options may lead to smaller statically-linked executables (after stripping). On ELF/DWARF systems these options do not degenerate the quality of the debug information. There could be issues with other object files/debug info formats. Only use these options when there are significant benefits from doing so. When you specify these options, the assembler and linker create larger object and executable files and are also slower. These options affect code generation. They prevent optimizations by the compiler and assembler using relative locations inside a translation unit since the locations are unknown until link time. An example of such an optimization is relaxing calls to short call instructions. -fbranch-target-load-optimize Perform branch target register load optimization before prologue / epilogue threading. The use of target registers can typically be exposed only during reload, thus hoisting loads out of loops and doing inter-block scheduling needs a separate optimization pass. -fbranch-target-load-optimize2 Perform branch target register load optimization after prologue / epilogue threading. -fbtr-bb-exclusive When performing branch target register load optimization, don't reuse branch target registers within any basic block. -fstdarg-opt Optimize the prologue of variadic argument functions with respect to usage of those arguments. -fsection-anchors Try to reduce the number of symbolic address calculations by using shared "anchor" symbols to address nearby objects. This transformation can help to reduce the number of GOT entries and GOT accesses on some targets. For example, the implementation of the following function "foo": static int a, b, c; int foo (void) { return a + b + c; } usually calculates the addresses of all three variables, but if you compile it with -fsection-anchors, it accesses the variables from a common anchor point instead. 
The effect is similar to the following pseudocode (which isn't valid C): int foo (void) { register int *xr = &x; return xr[&a - &x] + xr[&b - &x] + xr[&c - &x]; } Not all targets support this option. --param name=value In some places, GCC uses various constants to control the amount of optimization that is done. For example, GCC does not inline functions that contain more than a certain number of instructions. You can control some of these constants on the command line using the --param option. The names of specific parameters, and the meaning of the values, are tied to the internals of the compiler, and are subject to change without notice in future releases. In order to get minimal, maximal and default value of a parameter, one can use --help=param -Q options. In each case, the value is an integer. The allowable choices for name are: predictable-branch-outcome When branch is predicted to be taken with probability lower than this threshold (in percent), then it is considered well predictable. max-rtl-if-conversion-insns RTL if-conversion tries to remove conditional branches around a block and replace them with conditionally executed instructions. This parameter gives the maximum number of instructions in a block which should be considered for if-conversion. The compiler will also use other heuristics to decide whether if-conversion is likely to be profitable. max-rtl-if-conversion-predictable-cost max-rtl-if-conversion-unpredictable-cost RTL if-conversion will try to remove conditional branches around a block and replace them with conditionally executed instructions. These parameters give the maximum permissible cost for the sequence that would be generated by if-conversion depending on whether the branch is statically determined to be predictable or not. The units for this parameter are the same as those for the GCC internal seq_cost metric. The compiler will try to provide a reasonable default for this parameter using the BRANCH_COST target macro. max-crossjump-edges The maximum number of incoming edges to consider for cross-jumping. The algorithm used by -fcrossjumping is O(N^2) in the number of edges incoming to each block. Increasing values mean more aggressive optimization, making the compilation time increase with probably small improvement in executable size. min-crossjump-insns The minimum number of instructions that must be matched at the end of two blocks before cross-jumping is performed on them. This value is ignored in the case where all instructions in the block being cross-jumped from are matched. max-grow-copy-bb-insns The maximum code size expansion factor when copying basic blocks instead of jumping. The expansion is relative to a jump instruction. max-goto-duplication-insns The maximum number of instructions to duplicate to a block that jumps to a computed goto. To avoid O(N^2) behavior in a number of passes, GCC factors computed gotos early in the compilation process, and unfactors them as late as possible. Only computed jumps at the end of a basic blocks with no more than max-goto-duplication- insns are unfactored. max-delay-slot-insn-search The maximum number of instructions to consider when looking for an instruction to fill a delay slot. If more than this arbitrary number of instructions are searched, the time savings from filling the delay slot are minimal, so stop searching. Increasing values mean more aggressive optimization, making the compilation time increase with probably small improvement in execution time. 
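For example, individual parameters introduced by the --param mechanism described above are set like any other command-line option, and the supported names with their minimum, maximum and default values can be listed with --help=param -Q (the particular parameters and values below are arbitrary illustrations):

    gcc -O2 --param max-unrolled-insns=400 --param inline-unit-growth=40 -c foo.c
    gcc --help=param -Q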
max-delay-slot-live-search
When trying to fill delay slots, the maximum number of instructions to consider when searching for a block with valid live register information. Increasing this arbitrarily chosen value means more aggressive optimization, increasing the compilation time. This parameter should be removed when the delay slot code is rewritten to maintain the control-flow graph.

max-gcse-memory
The approximate maximum amount of memory that can be allocated in order to perform the global common subexpression elimination optimization. If more memory than specified is required, the optimization is not done.

max-gcse-insertion-ratio
If the ratio of expression insertions to deletions is larger than this value for any expression, then RTL PRE does not insert or remove the expression and thus leaves partially redundant computations in the instruction stream.

max-pending-list-length
The maximum number of pending dependencies scheduling allows before flushing the current state and starting over. Large functions with few branches or calls can create excessively large lists which needlessly consume memory and resources.

max-modulo-backtrack-attempts
The maximum number of backtrack attempts the scheduler should make when modulo scheduling a loop. Larger values can exponentially increase compilation time.

max-inline-insns-single
Several parameters control the tree inliner used in GCC. This number sets the maximum number of instructions (counted in GCC's internal representation) in a single function that the tree inliner considers for inlining. This only affects functions declared inline and methods implemented in a class declaration (C++).

max-inline-insns-auto
When you use -finline-functions (included in -O3), a lot of functions that would otherwise not be considered for inlining by the compiler are investigated. To those functions, a different (more restrictive) limit compared to functions declared inline can be applied.

max-inline-insns-small
This bound is applied to calls that are considered relevant with -finline-small-functions.

max-inline-insns-size
This bound is applied to calls that are optimized for size. Small growth may be desirable to anticipate optimization opportunities exposed by inlining.

uninlined-function-insns
Number of instructions accounted for by the inliner as function overhead, such as the function prologue and epilogue.

uninlined-function-time
Extra time accounted for by the inliner as function overhead, such as the time needed to execute the function prologue and epilogue.

uninlined-thunk-insns
uninlined-thunk-time
Same as --param uninlined-function-insns and --param uninlined-function-time, but applied to function thunks.

inline-min-speedup
When the estimated performance improvement of caller + callee runtime exceeds this threshold (in percent), the function can be inlined regardless of the limit on --param max-inline-insns-single and --param max-inline-insns-auto.

large-function-insns
The limit specifying really large functions. For functions larger than this limit after inlining, inlining is constrained by --param large-function-growth. This parameter is useful primarily to avoid extreme compilation time caused by non-linear algorithms used by the back end.

large-function-growth
Specifies the maximal growth of a large function caused by inlining, in percent. For example, parameter value 100 limits large function growth to 2.0 times the original size.

large-unit-insns
The limit specifying a large translation unit. Growth caused by inlining of units larger than this limit is limited by --param inline-unit-growth.
For small units this might be too tight. For example, consider a unit consisting of function A that is inline and B that just calls A three times. If B is small relative to A, the growth of unit is 300\% and yet such inlining is very sane. For very large units consisting of small inlineable functions, however, the overall unit growth limit is needed to avoid exponential explosion of code size. Thus for smaller units, the size is increased to --param large-unit-insns before applying --param inline- unit-growth. inline-unit-growth Specifies maximal overall growth of the compilation unit caused by inlining. For example, parameter value 20 limits unit growth to 1.2 times the original size. Cold functions (either marked cold via an attribute or by profile feedback) are not accounted into the unit size. ipcp-unit-growth Specifies maximal overall growth of the compilation unit caused by interprocedural constant propagation. For example, parameter value 10 limits unit growth to 1.1 times the original size. large-stack-frame The limit specifying large stack frames. While inlining the algorithm is trying to not grow past this limit too much. large-stack-frame-growth Specifies maximal growth of large stack frames caused by inlining in percents. For example, parameter value 1000 limits large stack frame growth to 11 times the original size. max-inline-insns-recursive max-inline-insns-recursive-auto Specifies the maximum number of instructions an out-of- line copy of a self-recursive inline function can grow into by performing recursive inlining. --param max-inline-insns-recursive applies to functions declared inline. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled; --param max-inline-insns- recursive-auto applies instead. max-inline-recursive-depth max-inline-recursive-depth-auto Specifies the maximum recursion depth used for recursive inlining. --param max-inline-recursive-depth applies to functions declared inline. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled; --param max-inline- recursive-depth-auto applies instead. min-inline-recursive-probability Recursive inlining is profitable only for function having deep recursion in average and can hurt for function having little recursion depth by increasing the prologue size or complexity of function body to other optimizers. When profile feedback is available (see -fprofile-generate) the actual recursion depth can be guessed from the probability that function recurses via a given call expression. This parameter limits inlining only to call expressions whose probability exceeds the given threshold (in percents). early-inlining-insns Specify growth that the early inliner can make. In effect it increases the amount of inlining for code having a large abstraction penalty. max-early-inliner-iterations Limit of iterations of the early inliner. This basically bounds the number of nested indirect calls the early inliner can resolve. Deeper chains are still handled by late inlining. comdat-sharing-probability Probability (in percent) that C++ inline function with comdat visibility are shared across multiple compilation units. profile-func-internal-id A parameter to control whether to use function internal id in profile database lookup. 
If the value is 0, the compiler uses an id that is based on function assembler name and filename, which makes old profile data more tolerant to source changes such as function reordering etc. min-vect-loop-bound The minimum number of iterations under which loops are not vectorized when -ftree-vectorize is used. The number of iterations after vectorization needs to be greater than the value specified by this option to allow vectorization. gcse-cost-distance-ratio Scaling factor in calculation of maximum distance an expression can be moved by GCSE optimizations. This is currently supported only in the code hoisting pass. The bigger the ratio, the more aggressive code hoisting is with simple expressions, i.e., the expressions that have cost less than gcse-unrestricted-cost. Specifying 0 disables hoisting of simple expressions. gcse-unrestricted-cost Cost, roughly measured as the cost of a single typical machine instruction, at which GCSE optimizations do not constrain the distance an expression can travel. This is currently supported only in the code hoisting pass. The lesser the cost, the more aggressive code hoisting is. Specifying 0 allows all expressions to travel unrestricted distances. max-hoist-depth The depth of search in the dominator tree for expressions to hoist. This is used to avoid quadratic behavior in hoisting algorithm. The value of 0 does not limit on the search, but may slow down compilation of huge functions. max-tail-merge-comparisons The maximum amount of similar bbs to compare a bb with. This is used to avoid quadratic behavior in tree tail merging. max-tail-merge-iterations The maximum amount of iterations of the pass over the function. This is used to limit compilation time in tree tail merging. store-merging-allow-unaligned Allow the store merging pass to introduce unaligned stores if it is legal to do so. max-stores-to-merge The maximum number of stores to attempt to merge into wider stores in the store merging pass. max-unrolled-insns The maximum number of instructions that a loop may have to be unrolled. If a loop is unrolled, this parameter also determines how many times the loop code is unrolled. max-average-unrolled-insns The maximum number of instructions biased by probabilities of their execution that a loop may have to be unrolled. If a loop is unrolled, this parameter also determines how many times the loop code is unrolled. max-unroll-times The maximum number of unrollings of a single loop. max-peeled-insns The maximum number of instructions that a loop may have to be peeled. If a loop is peeled, this parameter also determines how many times the loop code is peeled. max-peel-times The maximum number of peelings of a single loop. max-peel-branches The maximum number of branches on the hot path through the peeled sequence. max-completely-peeled-insns The maximum number of insns of a completely peeled loop. max-completely-peel-times The maximum number of iterations of a loop to be suitable for complete peeling. max-completely-peel-loop-nest-depth The maximum depth of a loop nest suitable for complete peeling. max-unswitch-insns The maximum number of insns of an unswitched loop. max-unswitch-level The maximum number of branches unswitched in a single loop. lim-expensive The minimum cost of an expensive expression in the loop invariant motion. iv-consider-all-candidates-bound Bound on number of candidates for induction variables, below which all candidates are considered for each use in induction variable optimizations. 
If there are more candidates than this, only the most relevant ones are considered to avoid quadratic time complexity. iv-max-considered-uses The induction variable optimizations give up on loops that contain more induction variable uses. iv-always-prune-cand-set-bound If the number of candidates in the set is smaller than this value, always try to remove unnecessary ivs from the set when adding a new one. avg-loop-niter Average number of iterations of a loop. dse-max-object-size Maximum size (in bytes) of objects tracked bytewise by dead store elimination. Larger values may result in larger compilation times. dse-max-alias-queries-per-store Maximum number of queries into the alias oracle per store. Larger values result in larger compilation times and may result in more removed dead stores. scev-max-expr-size Bound on size of expressions used in the scalar evolutions analyzer. Large expressions slow the analyzer. scev-max-expr-complexity Bound on the complexity of the expressions in the scalar evolutions analyzer. Complex expressions slow the analyzer. max-tree-if-conversion-phi-args Maximum number of arguments in a PHI supported by TREE if conversion unless the loop is marked with simd pragma. vect-max-version-for-alignment-checks The maximum number of run-time checks that can be performed when doing loop versioning for alignment in the vectorizer. vect-max-version-for-alias-checks The maximum number of run-time checks that can be performed when doing loop versioning for alias in the vectorizer. vect-max-peeling-for-alignment The maximum number of loop peels to enhance access alignment for vectorizer. Value -1 means no limit. max-iterations-to-track The maximum number of iterations of a loop the brute- force algorithm for analysis of the number of iterations of the loop tries to evaluate. hot-bb-count-ws-permille A basic block profile count is considered hot if it contributes to the given permillage (i.e. 0...1000) of the entire profiled execution. hot-bb-frequency-fraction Select fraction of the entry block frequency of executions of basic block in function given basic block needs to have to be considered hot. max-predicted-iterations The maximum number of loop iterations we predict statically. This is useful in cases where a function contains a single loop with known bound and another loop with unknown bound. The known number of iterations is predicted correctly, while the unknown number of iterations average to roughly 10. This means that the loop without bounds appears artificially cold relative to the other one. builtin-expect-probability Control the probability of the expression having the specified value. This parameter takes a percentage (i.e. 0 ... 100) as input. builtin-string-cmp-inline-length The maximum length of a constant string for a builtin string cmp call eligible for inlining. align-threshold Select fraction of the maximal frequency of executions of a basic block in a function to align the basic block. align-loop-iterations A loop expected to iterate at least the selected number of iterations is aligned. tracer-dynamic-coverage tracer-dynamic-coverage-feedback This value is used to limit superblock formation once the given percentage of executed instructions is covered. This limits unnecessary code size expansion. The tracer-dynamic-coverage-feedback parameter is used only when profile feedback is available. The real profiles (as opposed to statically estimated ones) are much less balanced allowing the threshold to be larger value. 
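The builtin-expect-probability parameter described earlier in this list works together with GCC's "__builtin_expect" built-in; a minimal sketch (the function and the value 95 are only illustrative):

    /* Mark the error path as unlikely so that GCC lays out the hot
       path as the fall-through; GCC-specific built-in.  */
    int
    process (int err)
    {
      if (__builtin_expect (err != 0, 0))
        return -1;              /* expected to be cold */
      return 0;                 /* expected to be hot */
    }

Compiling such code with, e.g., --param builtin-expect-probability=95 adjusts how strongly the compiler trusts the expectation.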
tracer-max-code-growth Stop tail duplication once code growth has reached given percentage. This is a rather artificial limit, as most of the duplicates are eliminated later in cross jumping, so it may be set to much higher values than is the desired code growth. tracer-min-branch-ratio Stop reverse growth when the reverse probability of best edge is less than this threshold (in percent). tracer-min-branch-probability tracer-min-branch-probability-feedback Stop forward growth if the best edge has probability lower than this threshold. Similarly to tracer-dynamic-coverage two parameters are provided. tracer-min-branch-probability-feedback is used for compilation with profile feedback and tracer-min- branch-probability compilation without. The value for compilation with profile feedback needs to be more conservative (higher) in order to make tracer effective. stack-clash-protection-guard-size Specify the size of the operating system provided stack guard as 2 raised to num bytes. Higher values may reduce the number of explicit probes, but a value larger than the operating system provided guard will leave code vulnerable to stack clash style attacks. stack-clash-protection-probe-interval Stack clash protection involves probing stack space as it is allocated. This param controls the maximum distance between probes into the stack as 2 raised to num bytes. Higher values may reduce the number of explicit probes, but a value larger than the operating system provided guard will leave code vulnerable to stack clash style attacks. max-cse-path-length The maximum number of basic blocks on path that CSE considers. max-cse-insns The maximum number of instructions CSE processes before flushing. ggc-min-expand GCC uses a garbage collector to manage its own memory allocation. This parameter specifies the minimum percentage by which the garbage collector's heap should be allowed to expand between collections. Tuning this may improve compilation speed; it has no effect on code generation. The default is 30% + 70% * (RAM/1GB) with an upper bound of 100% when RAM >= 1GB. If "getrlimit" is available, the notion of "RAM" is the smallest of actual RAM and "RLIMIT_DATA" or "RLIMIT_AS". If GCC is not able to calculate RAM on a particular platform, the lower bound of 30% is used. Setting this parameter and ggc-min- heapsize to zero causes a full collection to occur at every opportunity. This is extremely slow, but can be useful for debugging. ggc-min-heapsize Minimum size of the garbage collector's heap before it begins bothering to collect garbage. The first collection occurs after the heap expands by ggc-min- expand% beyond ggc-min-heapsize. Again, tuning this may improve compilation speed, and has no effect on code generation. The default is the smaller of RAM/8, RLIMIT_RSS, or a limit that tries to ensure that RLIMIT_DATA or RLIMIT_AS are not exceeded, but with a lower bound of 4096 (four megabytes) and an upper bound of 131072 (128 megabytes). If GCC is not able to calculate RAM on a particular platform, the lower bound is used. Setting this parameter very large effectively disables garbage collection. Setting this parameter and ggc-min-expand to zero causes a full collection to occur at every opportunity. max-reload-search-insns The maximum number of instruction reload should look backward for equivalent register. Increasing values mean more aggressive optimization, making the compilation time increase with probably slightly better performance. 
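As an illustration of the garbage-collector parameters described above, forcing a collection at every opportunity is extremely slow but can help when debugging the compiler itself (the source file is illustrative):

    gcc -O2 --param ggc-min-expand=0 --param ggc-min-heapsize=0 -c foo.c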
max-cselib-memory-locations The maximum number of memory locations cselib should take into account. Increasing values mean more aggressive optimization, making the compilation time increase with probably slightly better performance. max-sched-ready-insns The maximum number of instructions ready to be issued the scheduler should consider at any given time during the first scheduling pass. Increasing values mean more thorough searches, making the compilation time increase with probably little benefit. max-sched-region-blocks The maximum number of blocks in a region to be considered for interblock scheduling. max-pipeline-region-blocks The maximum number of blocks in a region to be considered for pipelining in the selective scheduler. max-sched-region-insns The maximum number of insns in a region to be considered for interblock scheduling. max-pipeline-region-insns The maximum number of insns in a region to be considered for pipelining in the selective scheduler. min-spec-prob The minimum probability (in percents) of reaching a source block for interblock speculative scheduling. max-sched-extend-regions-iters The maximum number of iterations through CFG to extend regions. A value of 0 disables region extensions. max-sched-insn-conflict-delay The maximum conflict delay for an insn to be considered for speculative motion. sched-spec-prob-cutoff The minimal probability of speculation success (in percents), so that speculative insns are scheduled. sched-state-edge-prob-cutoff The minimum probability an edge must have for the scheduler to save its state across it. sched-mem-true-dep-cost Minimal distance (in CPU cycles) between store and load targeting same memory locations. selsched-max-lookahead The maximum size of the lookahead window of selective scheduling. It is a depth of search for available instructions. selsched-max-sched-times The maximum number of times that an instruction is scheduled during selective scheduling. This is the limit on the number of iterations through which the instruction may be pipelined. selsched-insns-to-rename The maximum number of best instructions in the ready list that are considered for renaming in the selective scheduler. sms-min-sc The minimum value of stage count that swing modulo scheduler generates. max-last-value-rtl The maximum size measured as number of RTLs that can be recorded in an expression in combiner for a pseudo register as last known value of that register. max-combine-insns The maximum number of instructions the RTL combiner tries to combine. integer-share-limit Small integer constants can use a shared data structure, reducing the compiler's memory usage and increasing its speed. This sets the maximum value of a shared integer constant. ssp-buffer-size The minimum size of buffers (i.e. arrays) that receive stack smashing protection when -fstack-protection is used. min-size-for-stack-sharing The minimum size of variables taking part in stack slot sharing when not optimizing. max-jump-thread-duplication-stmts Maximum number of statements allowed in a block that needs to be duplicated when threading jumps. max-fields-for-field-sensitive Maximum number of fields in a structure treated in a field sensitive manner during pointer analysis. prefetch-latency Estimate on average number of instructions that are executed before prefetch finishes. The distance prefetched ahead is proportional to this constant. Increasing this number may also lead to less streams being prefetched (see simultaneous-prefetches). 
simultaneous-prefetches Maximum number of prefetches that can run at the same time. l1-cache-line-size The size of cache line in L1 data cache, in bytes. l1-cache-size The size of L1 data cache, in kilobytes. l2-cache-size The size of L2 data cache, in kilobytes. prefetch-dynamic-strides Whether the loop array prefetch pass should issue software prefetch hints for strides that are non- constant. In some cases this may be beneficial, though the fact the stride is non-constant may make it hard to predict when there is clear benefit to issuing these hints. Set to 1 if the prefetch hints should be issued for non- constant strides. Set to 0 if prefetch hints should be issued only for strides that are known to be constant and below prefetch-minimum-stride. prefetch-minimum-stride Minimum constant stride, in bytes, to start using prefetch hints for. If the stride is less than this threshold, prefetch hints will not be issued. This setting is useful for processors that have hardware prefetchers, in which case there may be conflicts between the hardware prefetchers and the software prefetchers. If the hardware prefetchers have a maximum stride they can handle, it should be used here to improve the use of software prefetchers. A value of -1 means we don't have a threshold and therefore prefetch hints can be issued for any constant stride. This setting is only useful for strides that are known and constant. loop-interchange-max-num-stmts The maximum number of stmts in a loop to be interchanged. loop-interchange-stride-ratio The minimum ratio between stride of two loops for interchange to be profitable. min-insn-to-prefetch-ratio The minimum ratio between the number of instructions and the number of prefetches to enable prefetching in a loop. prefetch-min-insn-to-mem-ratio The minimum ratio between the number of instructions and the number of memory references to enable prefetching in a loop. use-canonical-types Whether the compiler should use the "canonical" type system. Should always be 1, which uses a more efficient internal mechanism for comparing types in C++ and Objective-C++. However, if bugs in the canonical type system are causing compilation failures, set this value to 0 to disable canonical types. switch-conversion-max-branch-ratio Switch initialization conversion refuses to create arrays that are bigger than switch-conversion-max-branch-ratio times the number of branches in the switch. max-partial-antic-length Maximum length of the partial antic set computed during the tree partial redundancy elimination optimization (-ftree-pre) when optimizing at -O3 and above. For some sorts of source code the enhanced partial redundancy elimination optimization can run away, consuming all of the memory available on the host machine. This parameter sets a limit on the length of the sets that are computed, which prevents the runaway behavior. Setting a value of 0 for this parameter allows an unlimited set length. rpo-vn-max-loop-depth Maximum loop depth that is value-numbered optimistically. When the limit hits the innermost rpo-vn-max-loop-depth loops and the outermost loop in the loop nest are value- numbered optimistically and the remaining ones not. sccvn-max-alias-queries-per-access Maximum number of alias-oracle queries we perform when looking for redundancies for loads and stores. If this limit is hit the search is aborted and the load or store is not considered redundant. The number of queries is algorithmically limited to the number of stores on all paths from the load to the function entry. 
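The prefetch-related parameters above only take effect when the loop array prefetch pass is active; assuming it is enabled with -fprefetch-loop-arrays (documented elsewhere in this manual), a hypothetical tuning run might look like:

    gcc -O3 -fprefetch-loop-arrays --param prefetch-latency=400 \
        --param simultaneous-prefetches=6 --param prefetch-minimum-stride=64 -c foo.c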
ira-max-loops-num
IRA uses regional register allocation by default. If a function contains more loops than the number given by this parameter, only at most the given number of the most frequently-executed loops form regions for regional register allocation.

ira-max-conflict-table-size
Although IRA uses a sophisticated algorithm to compress the conflict table, the table can still require excessive amounts of memory for huge functions. If the conflict table for a function could be more than the size in MB given by this parameter, the register allocator instead uses a faster, simpler, and lower-quality algorithm that does not require building a pseudo-register conflict table.

ira-loop-reserved-regs
IRA can be used to evaluate more accurate register pressure in loops for decisions to move loop invariants (see -O3). The number of available registers reserved for some other purposes is given by this parameter. The default value of the parameter is the best found from numerous experiments.

lra-inheritance-ebb-probability-cutoff
LRA tries to reuse values reloaded in registers in subsequent insns. This optimization is called inheritance. An EBB is used as a region to do this optimization. The parameter defines a minimal fall-through edge probability, in percent, used to add a BB to an inheritance EBB in LRA. The default value was chosen from numerous runs of SPEC2000 on x86-64.

loop-invariant-max-bbs-in-loop
Loop invariant motion can be very expensive, both in compilation time and in the amount of compile-time memory needed, with very large loops. Loops with more basic blocks than this parameter won't have loop invariant motion optimization performed on them.

loop-max-datarefs-for-datadeps
Building data dependencies is expensive for very large loops. This parameter limits the number of data references in loops that are considered for data dependence analysis. These large loops are not handled by the optimizations using loop data dependencies.

max-vartrack-size
Sets a maximum number of hash table slots to use during variable tracking dataflow analysis of any function. If this limit is exceeded with variable tracking at assignments enabled, analysis for that function is retried without it, after removing all debug insns from the function. If the limit is exceeded even without debug insns, var tracking analysis is completely disabled for the function. Setting the parameter to zero makes it unlimited.

max-vartrack-expr-depth
Sets a maximum number of recursion levels when attempting to map variable names or debug temporaries to value expressions. This trades compilation time for more complete debug information. If this is set too low, value expressions that are available and could be represented in debug information may end up not being used; setting this higher may enable the compiler to find more complex debug expressions, but compile time and memory use may grow.

max-debug-marker-count
Sets a threshold on the number of debug markers (e.g. begin stmt markers) to avoid complexity explosion at inlining or expanding to RTL. If a function has more such gimple stmts than the set limit, such stmts will be dropped from the inlined copy of a function, and from its RTL expansion.

min-nondebug-insn-uid
Use uids starting at this parameter for nondebug insns. The range below the parameter is reserved exclusively for debug insns created by -fvar-tracking-assignments, but debug insns may get (non-overlapping) uids above it if the reserved range is exhausted.
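When debug information for optimized code is incomplete because the variable-tracking limits described above are being hit, they can be raised or lifted; for example (zero meaning unlimited is described under max-vartrack-size, and the depth value shown is arbitrary):

    gcc -g -O2 --param max-vartrack-size=0 --param max-vartrack-expr-depth=16 -c foo.c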
ipa-sra-ptr-growth-factor
IPA-SRA replaces a pointer to an aggregate with one or more new parameters only when their cumulative size is less than or equal to ipa-sra-ptr-growth-factor times the size of the original pointer parameter.

sra-max-scalarization-size-Ospeed
sra-max-scalarization-size-Osize
The two Scalar Reduction of Aggregates passes (SRA and IPA-SRA) aim to replace scalar parts of aggregates with uses of independent scalar variables. These parameters control the maximum size, in storage units, of an aggregate that is considered for replacement when compiling for speed (sra-max-scalarization-size-Ospeed) or size (sra-max-scalarization-size-Osize), respectively.

sra-max-propagations
The maximum number of artificial accesses that Scalar Replacement of Aggregates (SRA) will track, per one local variable, in order to facilitate copy propagation.

tm-max-aggregate-size
When making copies of thread-local variables in a transaction, this parameter specifies the size in bytes after which variables are saved with the logging functions, as opposed to save/restore code sequence pairs. This option only applies when using -fgnu-tm.

graphite-max-nb-scop-params
To avoid exponential effects in the Graphite loop transforms, the number of parameters in a Static Control Part (SCoP) is bounded. A value of zero can be used to lift the bound. A variable whose value is unknown at compilation time and defined outside a SCoP is a parameter of the SCoP.

loop-block-tile-size
Loop blocking or strip mining transforms, enabled with -floop-block or -floop-strip-mine, strip mine each loop in the loop nest by a given number of iterations. The strip length can be changed using the loop-block-tile-size parameter.

ipa-cp-value-list-size
IPA-CP attempts to track all possible values and types passed to a function's parameter in order to propagate them and perform devirtualization. ipa-cp-value-list-size is the maximum number of values and types it stores per one formal parameter of a function.

ipa-cp-eval-threshold
IPA-CP calculates its own score of cloning profitability heuristics and performs those cloning opportunities with scores that exceed ipa-cp-eval-threshold.

ipa-cp-recursion-penalty
Percentage penalty that recursive functions will receive when they are evaluated for cloning.

ipa-cp-single-call-penalty
Percentage penalty that functions containing a single call to another function will receive when they are evaluated for cloning.

ipa-max-agg-items
IPA-CP is also capable of propagating a number of scalar values passed in an aggregate. ipa-max-agg-items controls the maximum number of such values per one parameter.

ipa-cp-loop-hint-bonus
When IPA-CP determines that a cloning candidate would make the number of iterations of a loop known, it adds a bonus of ipa-cp-loop-hint-bonus to the profitability score of the candidate.

ipa-cp-array-index-hint-bonus
When IPA-CP determines that a cloning candidate would make the index of an array access known, it adds a bonus of ipa-cp-array-index-hint-bonus to the profitability score of the candidate.

ipa-max-aa-steps
During its analysis of function bodies, IPA-CP employs alias analysis in order to track values pointed to by function parameters. In order not to spend too much time analyzing huge functions, it gives up and considers all memory clobbered after examining ipa-max-aa-steps statements modifying memory.

lto-partitions
Specify the desired number of partitions produced during WHOPR compilation. The number of partitions should exceed the number of CPUs used for compilation.
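For a large link-time-optimized build, for example, this WHOPR partitioning parameter can be passed at link time along with -flto (the object files and values shown are purely illustrative):

    gcc -o myprog -O2 -flto=16 --param lto-partitions=64 a.o b.o c.o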
lto-min-partition
Size of the minimal partition for WHOPR (in estimated instructions). This prevents the expense of splitting very small programs into too many partitions.

lto-max-partition
Size of the maximal partition for WHOPR (in estimated instructions), used to provide an upper bound on the size of an individual partition. Meant to be used only with balanced partitioning.

lto-max-streaming-parallelism
Maximal number of parallel processes used for LTO streaming.

cxx-max-namespaces-for-diagnostic-help
The maximum number of namespaces to consult for suggestions when C++ name lookup fails for an identifier.

sink-frequency-threshold
The maximum relative execution frequency (in percent) of the target block relative to a statement's original block to allow statement sinking of a statement. Larger numbers result in more aggressive statement sinking. A small positive adjustment is applied for statements with memory operands, as those are even more profitable to sink.

max-stores-to-sink
The maximum number of conditional store pairs that can be sunk. Set to 0 if either vectorization (-ftree-vectorize) or if-conversion (-ftree-loop-if-convert) is disabled.

allow-store-data-races
Allow optimizers to introduce new data races on stores. Set to 1 to allow, otherwise to 0.

case-values-threshold
The smallest number of different values for which it is best to use a jump table instead of a tree of conditional branches. If the value is 0, use the default for the machine.

tree-reassoc-width
Set the maximum number of instructions executed in parallel in a reassociated tree. This parameter overrides the target-dependent heuristics used by default if it has a non-zero value.

sched-pressure-algorithm
Choose between the two available implementations of -fsched-pressure. Algorithm 1 is the original implementation and is the more likely to prevent instructions from being reordered. Algorithm 2 was designed to be a compromise between the relatively conservative approach taken by algorithm 1 and the rather aggressive approach taken by the default scheduler. It relies more heavily on having a regular register file and accurate register pressure classes. See haifa-sched.c in the GCC sources for more details. The default choice depends on the target.

max-slsr-cand-scan
Set the maximum number of existing candidates that are considered when seeking a basis for a new straight-line strength reduction candidate.

asan-globals
Enable buffer overflow detection for global objects. This kind of protection is enabled by default if you are using the -fsanitize=address option. To disable global objects protection use --param asan-globals=0.

asan-stack
Enable buffer overflow detection for stack objects. This kind of protection is enabled by default when using -fsanitize=address. To disable stack protection use the --param asan-stack=0 option.

asan-instrument-reads
Enable buffer overflow detection for memory reads. This kind of protection is enabled by default when using -fsanitize=address. To disable memory reads protection use --param asan-instrument-reads=0.

asan-instrument-writes
Enable buffer overflow detection for memory writes. This kind of protection is enabled by default when using -fsanitize=address. To disable memory writes protection use the --param asan-instrument-writes=0 option.

asan-memintrin
Enable detection for built-in functions. This kind of protection is enabled by default when using -fsanitize=address. To disable built-in functions protection use --param asan-memintrin=0.

asan-use-after-return
Enable detection of use-after-return.
This kind of protection is enabled by default when using the -fsanitize=address option. To disable it use --param asan-use-after-return=0. Note: By default the check is disabled at run time. To enable it, add "detect_stack_use_after_return=1" to the environment variable ASAN_OPTIONS. asan-instrumentation-with-call-threshold If number of memory accesses in function being instrumented is greater or equal to this number, use callbacks instead of inline checks. E.g. to disable inline code use --param asan-instrumentation-with-call-threshold=0. use-after-scope-direct-emission-threshold If the size of a local variable in bytes is smaller or equal to this number, directly poison (or unpoison) shadow memory instead of using run-time callbacks. max-fsm-thread-path-insns Maximum number of instructions to copy when duplicating blocks on a finite state automaton jump thread path. max-fsm-thread-length Maximum number of basic blocks on a finite state automaton jump thread path. max-fsm-thread-paths Maximum number of new jump thread paths to create for a finite state automaton. parloops-chunk-size Chunk size of omp schedule for loops parallelized by parloops. parloops-schedule Schedule type of omp schedule for loops parallelized by parloops (static, dynamic, guided, auto, runtime). parloops-min-per-thread The minimum number of iterations per thread of an innermost parallelized loop for which the parallelized variant is preferred over the single threaded one. Note that for a parallelized loop nest the minimum number of iterations of the outermost loop per thread is two. max-ssa-name-query-depth Maximum depth of recursion when querying properties of SSA names in things like fold routines. One level of recursion corresponds to following a use-def chain. hsa-gen-debug-stores Enable emission of special debug stores within HSA kernels which are then read and reported by libgomp plugin. Generation of these stores is disabled by default, use --param hsa-gen-debug-stores=1 to enable it. max-speculative-devirt-maydefs The maximum number of may-defs we analyze when looking for a must-def specifying the dynamic type of an object that invokes a virtual call we may be able to devirtualize speculatively. max-vrp-switch-assertions The maximum number of assertions to add along the default edge of a switch statement during VRP. unroll-jam-min-percent The minimum percentage of memory references that must be optimized away for the unroll-and-jam transformation to be considered profitable. unroll-jam-max-unroll The maximum number of times the outer loop should be unrolled by the unroll-and-jam transformation. max-rtl-if-conversion-unpredictable-cost Maximum permissible cost for the sequence that would be generated by the RTL if-conversion pass for a branch that is considered unpredictable. max-variable-expansions-in-unroller If -fvariable-expansion-in-unroller is used, the maximum number of times that an individual variable will be expanded during loop unrolling. tracer-min-branch-probability-feedback Stop forward growth if the probability of best edge is less than this threshold (in percent). Used when profile feedback is available. partial-inlining-entry-probability Maximum probability of the entry BB of split region (in percent relative to entry BB of the function) to make partial inlining happen. max-tracked-strlens Maximum number of strings for which strlen optimization pass will track string lengths. gcse-after-reload-partial-fraction The threshold ratio for performing partial redundancy elimination after reload. 
gcse-after-reload-critical-fraction The threshold ratio of critical edges execution count that permit performing redundancy elimination after reload. max-loop-header-insns The maximum number of insns in loop header duplicated by the copy loop headers pass. vect-epilogues-nomask Enable loop epilogue vectorization using smaller vector size. slp-max-insns-in-bb Maximum number of instructions in basic block to be considered for SLP vectorization. avoid-fma-max-bits Maximum number of bits for which we avoid creating FMAs. sms-loop-average-count-threshold A threshold on the average loop count considered by the swing modulo scheduler. sms-dfa-history The number of cycles the swing modulo scheduler considers when checking conflicts using DFA. hot-bb-count-fraction Select the fraction of the maximal count of repetitions of a basic block in the program that a given basic block needs to have to be considered hot (used in non-LTO mode). max-inline-insns-recursive-auto The maximum number of instructions a non-inline function can grow to via recursive inlining. graphite-allow-codegen-errors Whether codegen errors should be ICEs when -fchecking. sms-max-ii-factor A factor for tuning the upper bound that the swing modulo scheduler uses for scheduling a loop. lra-max-considered-reload-pseudos The maximum number of reload pseudos which are considered during spilling a non-reload pseudo. max-pow-sqrt-depth Maximum depth of sqrt chains to use when synthesizing exponentiation by a real constant. max-dse-active-local-stores Maximum number of active local stores in RTL dead store elimination. asan-instrument-allocas Enable asan allocas/VLAs protection. max-iterations-computation-cost Bound on the cost of an expression to compute the number of iterations. max-isl-operations Maximum number of isl operations, 0 means unlimited. graphite-max-arrays-per-scop Maximum number of arrays per scop. max-vartrack-reverse-op-size Maximum size of loc list for which reverse ops should be added. unlikely-bb-count-fraction The minimum fraction of profile runs that a given basic block execution count must have in order not to be considered unlikely. tracer-dynamic-coverage-feedback The percentage of function, weighted by execution frequency, that must be covered by trace formation. Used when profile feedback is available. max-inline-recursive-depth-auto The maximum depth of recursive inlining for non-inline functions. fsm-scale-path-stmts Scale factor to apply to the number of statements in a threading path when comparing to the number of (scaled) blocks. fsm-maximum-phi-arguments Maximum number of arguments a PHI may have before the FSM threader will not try to thread through its block. uninit-control-dep-attempts Maximum number of nested calls to search for control dependencies during uninitialized variable analysis. indir-call-topn-profile Track top N target addresses in indirect-call profile. max-once-peeled-insns The maximum number of insns of a peeled loop that rolls only once. sra-max-scalarization-size-Osize Maximum size, in storage units, of an aggregate which should be considered for scalarization when compiling for size. fsm-scale-path-blocks Scale factor to apply to the number of blocks in a threading path when comparing to the number of (scaled) statements. sched-autopref-queue-depth Hardware autoprefetcher scheduler model control flag. Number of lookahead cycles the model looks into; at '0' only enable the instruction sorting heuristic.
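Most entries in this list can be experimented with directly from the command line. As one concrete illustration, the effect of case-values-threshold (described above) can be seen by compiling a small switch with different values and comparing the generated assembly. This is only a sketch: the chosen value is arbitrary and the jump-table versus branch-tree crossover point is target-dependent.

        /* switch_demo.c -- compare the generated code with, e.g.:
             gcc -O2 -S switch_demo.c -o default.s
             gcc -O2 -S --param case-values-threshold=20 switch_demo.c -o tree.s
           A larger threshold tends to make GCC prefer a tree of branches
           over a jump table for this switch (exact behavior varies).   */
        int classify (int c)
        {
          switch (c)
            {
            case 0:  return 10;
            case 1:  return 11;
            case 2:  return 12;
            case 3:  return 13;
            case 4:  return 14;
            case 5:  return 15;
            case 6:  return 16;
            case 7:  return 17;
            default: return -1;
            }
        }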
loop-versioning-max-inner-insns The maximum number of instructions that an inner loop can have before the loop versioning pass considers it too big to copy. loop-versioning-max-outer-insns The maximum number of instructions that an outer loop can have before the loop versioning pass considers it too big to copy, discounting any instructions in inner loops that directly benefit from versioning. ssa-name-def-chain-limit The maximum number of SSA_NAME assignments to follow in determining a property of a variable such as its value. This limits the number of iterations or recursive calls GCC performs when optimizing certain statements or when determining their validity prior to issuing diagnostics. Program Instrumentation Options GCC supports a number of command-line options that control adding run-time instrumentation to the code it normally generates. For example, one purpose of instrumentation is to collect profiling statistics for use in finding program hot spots, code coverage analysis, or profile-guided optimizations. Another class of program instrumentation is adding run-time checking to detect programming errors like invalid pointer dereferences or out-of-bounds array accesses, as well as deliberately hostile attacks such as stack smashing or C++ vtable hijacking. There is also a general hook which can be used to implement other forms of tracing or function-level instrumentation for debug or program analysis purposes. -p -pg Generate extra code to write profile information suitable for the analysis program prof (for -p) or gprof (for -pg). You must use this option when compiling the source files you want data about, and you must also use it when linking. You can use the function attribute "no_instrument_function" to suppress profiling of individual functions when compiling with these options. -fprofile-arcs Add code so that program flow arcs are instrumented. During execution the program records how many times each branch and call is executed and how many times it is taken or returns. On targets that support constructors with priority support, profiling properly handles constructors, destructors and C++ constructors (and destructors) of classes which are used as a type of a global variable. When the compiled program exits it saves this data to a file called auxname.gcda for each source file. The data may be used for profile-directed optimizations (-fbranch-probabilities), or for test coverage analysis (-ftest-coverage). Each object file's auxname is generated from the name of the output file, if explicitly specified and it is not the final executable, otherwise it is the basename of the source file. In both cases any suffix is removed (e.g. foo.gcda for input file dir/foo.c, or dir/foo.gcda for output file specified as -o dir/foo.o). --coverage This option is used to compile and link code instrumented for coverage analysis. The option is a synonym for -fprofile-arcs -ftest-coverage (when compiling) and -lgcov (when linking). See the documentation for those options for more details. * Compile the source files with -fprofile-arcs plus optimization and code generation options. For test coverage analysis, use the additional -ftest-coverage option. You do not need to profile every source file in a program. * Compile the source files additionally with -fprofile-abs-path to create absolute path names in the .gcno files. This allows gcov to find the correct sources in projects where compilations occur with different working directories.
* Link your object files with -lgcov or -fprofile-arcs (the latter implies the former). * Run the program on a representative workload to generate the arc profile information. This may be repeated any number of times. You can run concurrent instances of your program, and provided that the file system supports locking, the data files will be correctly updated. Unless a strict ISO C dialect option is in effect, "fork" calls are detected and correctly handled without double counting. * For profile-directed optimizations, compile the source files again with the same optimization and code generation options plus -fbranch-probabilities. * For test coverage analysis, use gcov to produce human readable information from the .gcno and .gcda files. Refer to the gcov documentation for further information. With -fprofile-arcs, for each function of your program GCC creates a program flow graph, then finds a spanning tree for the graph. Only arcs that are not on the spanning tree have to be instrumented: the compiler adds code to count the number of times that these arcs are executed. When an arc is the only exit or only entrance to a block, the instrumentation code can be added to the block; otherwise, a new basic block must be created to hold the instrumentation code. -ftest-coverage Produce a notes file that the gcov code-coverage utility can use to show program coverage. Each source file's note file is called auxname.gcno. Refer to the -fprofile-arcs option above for a description of auxname and instructions on how to generate test coverage data. Coverage data matches the source files more closely if you do not optimize. -fprofile-abs-path Automatically convert relative source file names to absolute path names in the .gcno files. This allows gcov to find the correct sources in projects where compilations occur with different working directories. -fprofile-dir=path Set the directory to search for the profile data files in to path. This option affects only the profile data generated by -fprofile-generate, -ftest-coverage, -fprofile-arcs and used by -fprofile-use and -fbranch-probabilities and its related options. Both absolute and relative paths can be used. By default, GCC uses the current directory as path, thus the profile data file appears in the same directory as the object file. In order to prevent the file name clashing, if the object file name is not an absolute path, we mangle the absolute path of the sourcename.gcda file and use it as the file name of a .gcda file. When an executable is run in a massive parallel environment, it is recommended to save profile to different folders. That can be done with variables in path that are exported during run-time: %p process ID. %q{VAR} value of environment variable VAR -fprofile-generate -fprofile-generate=path Enable options usually used for instrumenting application to produce profile useful for later recompilation with profile feedback based optimization. You must use -fprofile-generate both when compiling and when linking your program. The following options are enabled: -fprofile-arcs, -fprofile-values, -finline-functions, and -fipa-bit-cp. If path is specified, GCC looks at the path to find the profile feedback data files. See -fprofile-dir. To optimize the program based on the collected profile information, use -fprofile-use. -fprofile-update=method Alter the update method for an application instrumented for profile feedback based optimization. The method argument should be one of single, atomic or prefer-atomic. 
The first one is useful for single-threaded applications, while the second one prevents profile corruption by emitting thread-safe code. Warning: When an application does not properly join all threads (or creates a detached thread), a profile file can still be corrupted. Using prefer-atomic would be transformed either to atomic, when supported by a target, or to single otherwise. The GCC driver automatically selects prefer-atomic when -pthread is present in the command line. -fprofile-filter-files=regex Instrument only functions from files whose names match any regular expression (separated by a semi-colon). For example, -fprofile-filter-files=main.c;module.*.c will instrument only main.c and all C files starting with 'module'. -fprofile-exclude-files=regex Instrument only functions from files whose names do not match all the regular expressions (separated by a semi-colon). For example, -fprofile-exclude-files=/usr/* will prevent instrumentation of all files that are located in the /usr/ folder. -fsanitize=address Enable AddressSanitizer, a fast memory error detector. Memory access instructions are instrumented to detect out-of-bounds and use-after-free bugs. The option enables -fsanitize-address-use-after-scope. See <https://github.com/google/sanitizers/wiki/AddressSanitizer> for more details. The run-time behavior can be influenced using the ASAN_OPTIONS environment variable. When set to "help=1", the available options are shown at startup of the instrumented program. See <https://github.com/google/sanitizers/wiki/AddressSanitizerFlags#run-time-flags> for a list of supported options. The option cannot be combined with -fsanitize=thread. -fsanitize=kernel-address Enable AddressSanitizer for the Linux kernel. See <https://github.com/google/kasan/wiki> for more details. -fsanitize=pointer-compare Instrument comparison operations (<, <=, >, >=) with pointer operands. The option must be combined with either -fsanitize=kernel-address or -fsanitize=address. The option cannot be combined with -fsanitize=thread. Note: By default the check is disabled at run time. To enable it, add "detect_invalid_pointer_pairs=2" to the environment variable ASAN_OPTIONS. Using "detect_invalid_pointer_pairs=1" detects invalid operation only when both pointers are non-null. -fsanitize=pointer-subtract Instrument subtraction with pointer operands. The option must be combined with either -fsanitize=kernel-address or -fsanitize=address. The option cannot be combined with -fsanitize=thread. Note: By default the check is disabled at run time. To enable it, add "detect_invalid_pointer_pairs=2" to the environment variable ASAN_OPTIONS. Using "detect_invalid_pointer_pairs=1" detects invalid operation only when both pointers are non-null. -fsanitize=thread Enable ThreadSanitizer, a fast data race detector. Memory access instructions are instrumented to detect data race bugs. See <https://github.com/google/sanitizers/wiki#threadsanitizer> for more details. The run-time behavior can be influenced using the TSAN_OPTIONS environment variable; see <https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags> for a list of supported options. The option cannot be combined with -fsanitize=address or -fsanitize=leak. Note that sanitized atomic builtins cannot throw exceptions when operating on invalid memory addresses with non-call exceptions (-fnon-call-exceptions). -fsanitize=leak Enable LeakSanitizer, a memory leak detector.
This option only matters for linking of executables and the executable is linked against a library that overrides "malloc" and other allocator functions. See <https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer > for more details. The run-time behavior can be influenced using the LSAN_OPTIONS environment variable. The option cannot be combined with -fsanitize=thread. -fsanitize=undefined Enable UndefinedBehaviorSanitizer, a fast undefined behavior detector. Various computations are instrumented to detect undefined behavior at runtime. See <https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html > for more details. The run-time behavior can be influenced using the UBSAN_OPTIONS environment variable. Current suboptions are: -fsanitize=shift This option enables checking that the result of a shift operation is not undefined. Note that what exactly is considered undefined differs slightly between C and C++, as well as between ISO C90 and C99, etc. This option has two suboptions, -fsanitize=shift-base and -fsanitize=shift-exponent. -fsanitize=shift-exponent This option enables checking that the second argument of a shift operation is not negative and is smaller than the precision of the promoted first argument. -fsanitize=shift-base If the second argument of a shift operation is within range, check that the result of a shift operation is not undefined. Note that what exactly is considered undefined differs slightly between C and C++, as well as between ISO C90 and C99, etc. -fsanitize=integer-divide-by-zero Detect integer division by zero as well as "INT_MIN / -1" division. -fsanitize=unreachable With this option, the compiler turns the "__builtin_unreachable" call into a diagnostics message call instead. When reaching the "__builtin_unreachable" call, the behavior is undefined. -fsanitize=vla-bound This option instructs the compiler to check that the size of a variable length array is positive. -fsanitize=null This option enables pointer checking. Particularly, the application built with this option turned on will issue an error message when it tries to dereference a NULL pointer, or if a reference (possibly an rvalue reference) is bound to a NULL pointer, or if a method is invoked on an object pointed by a NULL pointer. -fsanitize=return This option enables return statement checking. Programs built with this option turned on will issue an error message when the end of a non-void function is reached without actually returning a value. This option works in C++ only. -fsanitize=signed-integer-overflow This option enables signed integer overflow checking. We check that the result of "+", "*", and both unary and binary "-" does not overflow in the signed arithmetics. Note, integer promotion rules must be taken into account. That is, the following is not an overflow: signed char a = SCHAR_MAX; a++; -fsanitize=bounds This option enables instrumentation of array bounds. Various out of bounds accesses are detected. Flexible array members, flexible array member-like arrays, and initializers of variables with static storage are not instrumented. -fsanitize=bounds-strict This option enables strict instrumentation of array bounds. Most out of bounds accesses are detected, including flexible array members and flexible array member-like arrays. Initializers of variables with static storage are not instrumented. 
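Most of these suboptions are easiest to understand from a small failing program. The following sketch triggers two of the checks described above; the file name is illustrative and the exact run-time reports depend on the installed libubsan.

        /* ubsan_demo.c -- exercises two of the checks described above.
           A possible build and run (adjust flags to taste):
             gcc -g -fsanitize=undefined ubsan_demo.c -o ubsan_demo
             ./ubsan_demo                                              */
        #include <limits.h>
        #include <stdio.h>

        int main (void)
        {
          int a[4] = { 0, 1, 2, 3 };
          volatile int i = 4;          /* out-of-range index */
          volatile int big = INT_MAX;

          printf ("%d\n", big + 1);    /* -fsanitize=signed-integer-overflow */
          printf ("%d\n", a[i]);       /* -fsanitize=bounds */
          return 0;
        }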
-fsanitize=alignment This option enables checking of alignment of pointers when they are dereferenced, or when a reference is bound to insufficiently aligned target, or when a method or constructor is invoked on insufficiently aligned object. -fsanitize=object-size This option enables instrumentation of memory references using the "__builtin_object_size" function. Various out of bounds pointer accesses are detected. -fsanitize=float-divide-by-zero Detect floating-point division by zero. Unlike other similar options, -fsanitize=float-divide-by-zero is not enabled by -fsanitize=undefined, since floating-point division by zero can be a legitimate way of obtaining infinities and NaNs. -fsanitize=float-cast-overflow This option enables floating-point type to integer conversion checking. We check that the result of the conversion does not overflow. Unlike other similar options, -fsanitize=float-cast-overflow is not enabled by -fsanitize=undefined. This option does not work well with "FE_INVALID" exceptions enabled. -fsanitize=nonnull-attribute This option enables instrumentation of calls, checking whether null values are not passed to arguments marked as requiring a non-null value by the "nonnull" function attribute. -fsanitize=returns-nonnull-attribute This option enables instrumentation of return statements in functions marked with "returns_nonnull" function attribute, to detect returning of null values from such functions. -fsanitize=bool This option enables instrumentation of loads from bool. If a value other than 0/1 is loaded, a run-time error is issued. -fsanitize=enum This option enables instrumentation of loads from an enum type. If a value outside the range of values for the enum type is loaded, a run-time error is issued. -fsanitize=vptr This option enables instrumentation of C++ member function calls, member accesses and some conversions between pointers to base and derived classes, to verify the referenced object has the correct dynamic type. -fsanitize=pointer-overflow This option enables instrumentation of pointer arithmetics. If the pointer arithmetics overflows, a run-time error is issued. -fsanitize=builtin This option enables instrumentation of arguments to selected builtin functions. If an invalid value is passed to such arguments, a run-time error is issued. E.g. passing 0 as the argument to "__builtin_ctz" or "__builtin_clz" invokes undefined behavior and is diagnosed by this option. While -ftrapv causes traps for signed overflows to be emitted, -fsanitize=undefined gives a diagnostic message. This currently works only for the C family of languages. -fno-sanitize=all This option disables all previously enabled sanitizers. -fsanitize=all is not allowed, as some sanitizers cannot be used together. -fasan-shadow-offset=number This option forces GCC to use custom shadow offset in AddressSanitizer checks. It is useful for experimenting with different shadow memory layouts in Kernel AddressSanitizer. -fsanitize-sections=s1,s2,... Sanitize global variables in selected user-defined sections. si may contain wildcards. -fsanitize-recover[=opts] -fsanitize-recover= controls error recovery mode for sanitizers mentioned in comma-separated list of opts. Enabling this option for a sanitizer component causes it to attempt to continue running the program as if no error happened. This means multiple runtime errors can be reported in a single program run, and the exit code of the program may indicate success even when errors have been reported. 
The -fno-sanitize-recover= option can be used to alter this behavior: only the first detected error is reported and program then exits with a non-zero exit code. Currently this feature only works for -fsanitize=undefined (and its suboptions except for -fsanitize=unreachable and -fsanitize=return), -fsanitize=float-cast-overflow, -fsanitize=float-divide-by-zero, -fsanitize=bounds-strict, -fsanitize=kernel-address and -fsanitize=address. For these sanitizers error recovery is turned on by default, except -fsanitize=address, for which this feature is experimental. -fsanitize-recover=all and -fno-sanitize-recover=all is also accepted, the former enables recovery for all sanitizers that support it, the latter disables recovery for all sanitizers that support it. Even if a recovery mode is turned on the compiler side, it needs to be also enabled on the runtime library side, otherwise the failures are still fatal. The runtime library defaults to "halt_on_error=0" for ThreadSanitizer and UndefinedBehaviorSanitizer, while default value for AddressSanitizer is "halt_on_error=1". This can be overridden through setting the "halt_on_error" flag in the corresponding environment variable. Syntax without an explicit opts parameter is deprecated. It is equivalent to specifying an opts list of: undefined,float-cast-overflow,float-divide-by-zero,bounds-strict -fsanitize-address-use-after-scope Enable sanitization of local variables to detect use-after- scope bugs. The option sets -fstack-reuse to none. -fsanitize-undefined-trap-on-error The -fsanitize-undefined-trap-on-error option instructs the compiler to report undefined behavior using "__builtin_trap" rather than a "libubsan" library routine. The advantage of this is that the "libubsan" library is not needed and is not linked in, so this is usable even in freestanding environments. -fsanitize-coverage=trace-pc Enable coverage-guided fuzzing code instrumentation. Inserts a call to "__sanitizer_cov_trace_pc" into every basic block. -fsanitize-coverage=trace-cmp Enable dataflow guided fuzzing code instrumentation. Inserts a call to "__sanitizer_cov_trace_cmp1", "__sanitizer_cov_trace_cmp2", "__sanitizer_cov_trace_cmp4" or "__sanitizer_cov_trace_cmp8" for integral comparison with both operands variable or "__sanitizer_cov_trace_const_cmp1", "__sanitizer_cov_trace_const_cmp2", "__sanitizer_cov_trace_const_cmp4" or "__sanitizer_cov_trace_const_cmp8" for integral comparison with one operand constant, "__sanitizer_cov_trace_cmpf" or "__sanitizer_cov_trace_cmpd" for float or double comparisons and "__sanitizer_cov_trace_switch" for switch statements. -fcf-protection=[full|branch|return|none] Enable code instrumentation of control-flow transfers to increase program security by checking that target addresses of control-flow transfer instructions (such as indirect function call, function return, indirect jump) are valid. This prevents diverting the flow of control to an unexpected target. This is intended to protect against such threats as Return-oriented Programming (ROP), and similarly call/jmp-oriented programming (COP/JOP). The value "branch" tells the compiler to implement checking of validity of control-flow transfer at the point of indirect branch instructions, i.e. call/jmp instructions. The value "return" implements checking of validity at the point of returning from a function. The value "full" is an alias for specifying both "branch" and "return". The value "none" turns off instrumentation. The macro "__CET__" is defined when -fcf-protection is used. 
The first bit of "__CET__" is set to 1 for the value "branch" and the second bit of "__CET__" is set to 1 for the "return". You can also use the "nocf_check" attribute to identify which functions and calls should be skipped from instrumentation. Currently the x86 GNU/Linux target provides an implementation based on Intel Control-flow Enforcement Technology (CET) which works for i686 processor or newer. -fstack-protector Emit extra code to check for buffer overflows, such as stack smashing attacks. This is done by adding a guard variable to functions with vulnerable objects. This includes functions that call "alloca", and functions with buffers larger than 8 bytes. The guards are initialized when a function is entered and then checked when the function exits. If a guard check fails, an error message is printed and the program exits. -fstack-protector-all Like -fstack-protector except that all functions are protected. -fstack-protector-strong Like -fstack-protector but includes additional functions to be protected --- those that have local array definitions, or have references to local frame addresses. -fstack-protector-explicit Like -fstack-protector but only protects those functions which have the "stack_protect" attribute. -fstack-check Generate code to verify that you do not go beyond the boundary of the stack. You should specify this flag if you are running in an environment with multiple threads, but you only rarely need to specify it in a single-threaded environment since stack overflow is automatically detected on nearly all systems if there is only one stack. Note that this switch does not actually cause checking to be done; the operating system or the language runtime must do that. The switch causes generation of code to ensure that they see the stack being extended. You can additionally specify a string parameter: no means no checking, generic means force the use of old-style checking, specific means use the best checking method and is equivalent to bare -fstack-check. Old-style checking is a generic mechanism that requires no specific target support in the compiler but comes with the following drawbacks: 1. Modified allocation strategy for large objects: they are always allocated dynamically if their size exceeds a fixed threshold. Note this may change the semantics of some code. 2. Fixed limit on the size of the static frame of functions: when it is topped by a particular function, stack checking is not reliable and a warning is issued by the compiler. 3. Inefficiency: because of both the modified allocation strategy and the generic implementation, code performance is hampered. Note that old-style stack checking is also the fallback method for specific if no target support has been added in the compiler. -fstack-check= is designed for Ada's needs to detect infinite recursion and stack overflows. specific is an excellent choice when compiling Ada code. It is not generally sufficient to protect against stack-clash attacks. To protect against those you want -fstack-clash-protection. -fstack-clash-protection Generate code to prevent stack clash style attacks. When this option is enabled, the compiler will only allocate one page of stack space at a time and each page is accessed immediately after allocation. Thus, it prevents allocations from jumping over any stack guard page provided by the operating system. Most targets do not fully support stack clash protection. However, on those targets -fstack-clash-protection will protect dynamic stack allocations. 
-fstack-clash-protection may also provide limited protection for static stack allocations if the target supports -fstack-check=specific. -fstack-limit-register=reg -fstack-limit-symbol=sym -fno-stack-limit Generate code to ensure that the stack does not grow beyond a certain value, either the value of a register or the address of a symbol. If a larger stack is required, a signal is raised at run time. For most targets, the signal is raised before the stack overruns the boundary, so it is possible to catch the signal without taking special precautions. For instance, if the stack starts at absolute address 0x80000000 and grows downwards, you can use the flags -fstack-limit-symbol=__stack_limit and -Wl,--defsym,__stack_limit=0x7ffe0000 to enforce a stack limit of 128KB. Note that this may only work with the GNU linker. You can locally override stack limit checking by using the "no_stack_limit" function attribute. -fsplit-stack Generate code to automatically split the stack before it overflows. The resulting program has a discontiguous stack which can only overflow if the program is unable to allocate any more memory. This is most useful when running threaded programs, as it is no longer necessary to calculate a good stack size to use for each thread. This is currently only implemented for the x86 targets running GNU/Linux. When code compiled with -fsplit-stack calls code compiled without -fsplit-stack, there may not be much stack space available for the latter code to run. If compiling all code, including library code, with -fsplit-stack is not an option, then the linker can fix up these calls so that the code compiled without -fsplit-stack always has a large stack. Support for this is implemented in the gold linker in GNU binutils release 2.21 and later. -fvtable-verify=[std|preinit|none] This option is only available when compiling C++ code. It turns on (or off, if using -fvtable-verify=none) the security feature that verifies at run time, for every virtual call, that the vtable pointer through which the call is made is valid for the type of the object, and has not been corrupted or overwritten. If an invalid vtable pointer is detected at run time, an error is reported and execution of the program is immediately halted. This option causes run-time data structures to be built at program startup, which are used for verifying the vtable pointers. The options std and preinit control the timing of when these data structures are built. In both cases the data structures are built before execution reaches "main". Using -fvtable-verify=std causes the data structures to be built after shared libraries have been loaded and initialized. -fvtable-verify=preinit causes them to be built before shared libraries have been loaded and initialized. If this option appears multiple times in the command line with different values specified, none takes highest priority over both std and preinit; preinit takes priority over std. -fvtv-debug When used in conjunction with -fvtable-verify=std or -fvtable-verify=preinit, causes debug versions of the runtime functions for the vtable verification feature to be called. This flag also causes the compiler to log information about which vtable pointers it finds for each class. This information is written to a file named vtv_set_ptr_data.log in the directory named by the environment variable VTV_LOGS_DIR if that is defined or the current working directory otherwise. Note: This feature appends data to the log file. 
If you want a fresh log file, be sure to delete any existing one. -fvtv-counts This is a debugging flag. When used in conjunction with -fvtable-verify=std or -fvtable-verify=preinit, this causes the compiler to keep track of the total number of virtual calls it encounters and the number of verifications it inserts. It also counts the number of calls to certain run-time library functions that it inserts and logs this information for each compilation unit. The compiler writes this information to a file named vtv_count_data.log in the directory named by the environment variable VTV_LOGS_DIR if that is defined or the current working directory otherwise. It also counts the size of the vtable pointer sets for each class, and writes this information to vtv_class_set_sizes.log in the same directory. Note: This feature appends data to the log files. To get fresh log files, be sure to delete any existing ones. -finstrument-functions Generate instrumentation calls for entry and exit to functions. Just after function entry and just before function exit, the following profiling functions are called with the address of the current function and its call site. (On some platforms, "__builtin_return_address" does not work beyond the current function, so the call site information may not be available to the profiling functions otherwise.) void __cyg_profile_func_enter (void *this_fn, void *call_site); void __cyg_profile_func_exit (void *this_fn, void *call_site); The first argument is the address of the start of the current function, which may be looked up exactly in the symbol table. This instrumentation is also done for functions expanded inline in other functions. The profiling calls indicate where, conceptually, the inline function is entered and exited. This means that addressable versions of such functions must be available. If all your uses of a function are expanded inline, this may mean an additional expansion of code size. If you use "extern inline" in your C code, an addressable version of such functions must be provided. (This is normally the case anyway, but if you get lucky and the optimizer always expands the functions inline, you might have gotten away without providing static copies.) A function may be given the attribute "no_instrument_function", in which case this instrumentation is not done. This can be used, for example, for the profiling functions listed above, high-priority interrupt routines, and any functions from which the profiling functions cannot safely be called (perhaps signal handlers, if the profiling routines generate output or allocate memory). -finstrument-functions-exclude-file-list=file,file,... Set the list of functions that are excluded from instrumentation (see the description of -finstrument-functions). If the file that contains a function definition matches with one of file, then that function is not instrumented. The match is done on substrings: if the file parameter is a substring of the file name, it is considered to be a match. For example: -finstrument-functions-exclude-file-list=/bits/stl,include/sys excludes any inline function defined in files whose pathnames contain /bits/stl or include/sys. If, for some reason, you want to include the letter ',' in one of sym, write '\,'. For example, -finstrument-functions-exclude-file-list='\,\,tmp' (note the single quote surrounding the option). -finstrument-functions-exclude-function-list=sym,sym,...
This is similar to -finstrument-functions-exclude-file-list, but this option sets the list of function names to be excluded from instrumentation. The function name to be matched is its user-visible name, such as "vector<int> blah(const vector<int> &)", not the internal mangled name (e.g., "_Z4blahRSt6vectorIiSaIiEE"). The match is done on substrings: if the sym parameter is a substring of the function name, it is considered to be a match. For C99 and C++ extended identifiers, the function name must be given in UTF-8, not using universal character names. -fpatchable-function-entry=N[,M] Generate N NOPs right at the beginning of each function, with the function entry point before the Mth NOP. If M is omitted, it defaults to 0 so the function entry points to the address just at the first NOP. The NOP instructions reserve extra space which can be used to patch in any desired instrumentation at run time, provided that the code segment is writable. The amount of space is controllable indirectly via the number of NOPs; the NOP instruction used corresponds to the instruction emitted by the internal GCC back-end interface "gen_nop". This behavior is target-specific and may also depend on the architecture variant and/or other compilation options. For run-time identification, the starting addresses of these areas, which correspond to their respective function entries minus M, are additionally collected in the "__patchable_function_entries" section of the resulting binary. Note that the value of "__attribute__ ((patchable_function_entry (N,M)))" takes precedence over command-line option -fpatchable-function-entry=N,M. This can be used to increase the area size or to remove it completely on a single function. If "N=0", no pad location is recorded. The NOP instructions are inserted at---and maybe before, depending on M---the function entry address, even before the prologue. Options Controlling the Preprocessor These options control the C preprocessor, which is run on each C source file before actual compilation. If you use the -E option, nothing is done except preprocessing. Some of these options make sense only together with -E because they cause the preprocessor output to be unsuitable for actual compilation. In addition to the options listed here, there are a number of options to control search paths for include files documented in Directory Options. Options to control preprocessor diagnostics are listed in Warning Options. -D name Predefine name as a macro, with definition 1. -D name=definition The contents of definition are tokenized and processed as if they appeared during translation phase three in a #define directive. In particular, the definition is truncated by embedded newline characters. If you are invoking the preprocessor from a shell or shell- like program you may need to use the shell's quoting syntax to protect characters such as spaces that have a meaning in the shell syntax. If you wish to define a function-like macro on the command line, write its argument list with surrounding parentheses before the equals sign (if any). Parentheses are meaningful to most shells, so you should quote the option. With sh and csh, -D'name(args...)=definition' works. -D and -U options are processed in the order they are given on the command line. All -imacros file and -include file options are processed after all -D and -U options. -U name Cancel any previous definition of name, either built in or provided with a -D option. 
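The interaction of -D and -U with conditional compilation is worth a small example. The sketch below reads its configuration entirely from macros predefined on the command line; the macro names and values are purely illustrative.

        /* config_demo.c -- configured entirely through -D.  Illustrative
           invocations (macro names and values are examples only):
             gcc -DVERBOSE -D'GREETING="hello"' config_demo.c -o config_demo
             gcc -DVERBOSE -UVERBOSE config_demo.c -o config_demo
           In the second command the later -U cancels the earlier -D,
           since -D and -U are processed in command-line order.         */
        #include <stdio.h>

        #ifndef GREETING
        # define GREETING "default"
        #endif

        int main (void)
        {
        #ifdef VERBOSE
          printf ("greeting: %s\n", GREETING);
        #else
          printf ("%s\n", GREETING);
        #endif
          return 0;
        }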
-include file Process file as if "#include "file"" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the "#include "..."" search chain as normal. If multiple -include options are given, the files are included in the order they appear on the command line. -imacros file Exactly like -include, except that any output produced by scanning file is thrown away. Macros it defines remain defined. This allows you to acquire all the macros from a header without also processing its declarations. All files specified by -imacros are processed before all files specified by -include. -undef Do not predefine any system-specific or GCC-specific macros. The standard predefined macros remain defined. -pthread Define additional macros required for using the POSIX threads library. You should use this option consistently for both compilation and linking. This option is supported on GNU/Linux targets, most other Unix derivatives, and also on x86 Cygwin and MinGW targets. -M Instead of outputting the result of preprocessing, output a rule suitable for make describing the dependencies of the main source file. The preprocessor outputs one make rule containing the object file name for that source file, a colon, and the names of all the included files, including those coming from -include or -imacros command-line options. Unless specified explicitly (with -MT or -MQ), the object file name consists of the name of the source file with any suffix replaced with object file suffix and with any leading directory parts removed. If there are many included files then the rule is split into several lines using \-newline. The rule has no commands. This option does not suppress the preprocessor's debug output, such as -dM. To avoid mixing such debug output with the dependency rules you should explicitly specify the dependency output file with -MF, or use an environment variable like DEPENDENCIES_OUTPUT. Debug output is still sent to the regular output stream as normal. Passing -M to the driver implies -E, and suppresses warnings with an implicit -w. -MM Like -M but do not mention header files that are found in system header directories, nor header files that are included, directly or indirectly, from such a header. This implies that the choice of angle brackets or double quotes in an #include directive does not in itself determine whether that header appears in -MM dependency output. -MF file When used with -M or -MM, specifies a file to write the dependencies to. If no -MF switch is given the preprocessor sends the rules to the same place it would send preprocessed output. When used with the driver options -MD or -MMD, -MF overrides the default dependency output file. If file is -, then the dependencies are written to stdout. -MG In conjunction with an option such as -M requesting dependency generation, -MG assumes missing header files are generated files and adds them to the dependency list without raising an error. The dependency filename is taken directly from the "#include" directive without prepending any path. -MG also suppresses preprocessed output, as a missing header file renders this useless. This feature is used in automatic updating of makefiles. -MP This option instructs CPP to add a phony target for each dependency other than the main file, causing each to depend on nothing. 
These dummy rules work around errors make gives if you remove header files without updating the Makefile to match. This is typical output: test.o: test.c test.h test.h: -MT target Change the target of the rule emitted by dependency generation. By default CPP takes the name of the main input file, deletes any directory components and any file suffix such as .c, and appends the platform's usual object suffix. The result is the target. An -MT option sets the target to be exactly the string you specify. If you want multiple targets, you can specify them as a single argument to -MT, or use multiple -MT options. For example, -MT '$(objpfx)foo.o' might give $(objpfx)foo.o: foo.c -MQ target Same as -MT, but it quotes any characters which are special to Make. -MQ '$(objpfx)foo.o' gives $$(objpfx)foo.o: foo.c The default target is automatically quoted, as if it were given with -MQ. -MD -MD is equivalent to -M -MF file, except that -E is not implied. The driver determines file based on whether an -o option is given. If it is, the driver uses its argument but with a suffix of .d, otherwise it takes the name of the input file, removes any directory components and suffix, and applies a .d suffix. If -MD is used in conjunction with -E, any -o switch is understood to specify the dependency output file, but if used without -E, each -o is understood to specify a target object file. Since -E is not implied, -MD can be used to generate a dependency output file as a side effect of the compilation process. -MMD Like -MD except mention only user header files, not system header files. -fpreprocessed Indicate to the preprocessor that the input file has already been preprocessed. This suppresses things like macro expansion, trigraph conversion, escaped newline splicing, and processing of most directives. The preprocessor still recognizes and removes comments, so that you can pass a file preprocessed with -C to the compiler without problems. In this mode the integrated preprocessor is little more than a tokenizer for the front ends. -fpreprocessed is implicit if the input file has one of the extensions .i, .ii or .mi. These are the extensions that GCC uses for preprocessed files created by -save-temps. -fdirectives-only When preprocessing, handle directives, but do not expand macros. The option's behavior depends on the -E and -fpreprocessed options. With -E, preprocessing is limited to the handling of directives such as "#define", "#ifdef", and "#error". Other preprocessor operations, such as macro expansion and trigraph conversion are not performed. In addition, the -dD option is implicitly enabled. With -fpreprocessed, predefinition of command line and most builtin macros is disabled. Macros such as "__LINE__", which are contextually dependent, are handled normally. This enables compilation of files previously preprocessed with "-E -fdirectives-only". With both -E and -fpreprocessed, the rules for -fpreprocessed take precedence. This enables full preprocessing of files previously preprocessed with "-E -fdirectives-only". -fdollars-in-identifiers Accept $ in identifiers. -fextended-identifiers Accept universal character names in identifiers. This option is enabled by default for C99 (and later C standard versions) and C++. -fno-canonical-system-headers When preprocessing, do not shorten system header paths with canonicalization. -ftabstop=width Set the distance between tab stops. This helps the preprocessor report correct column numbers in warnings or errors, even if tabs appear on the line. 
If the value is less than 1 or greater than 100, the option is ignored. The default is 8. -ftrack-macro-expansion[=level] Track locations of tokens across macro expansions. This allows the compiler to emit diagnostic about the current macro expansion stack when a compilation error occurs in a macro expansion. Using this option makes the preprocessor and the compiler consume more memory. The level parameter can be used to choose the level of precision of token location tracking thus decreasing the memory consumption if necessary. Value 0 of level de-activates this option. Value 1 tracks tokens locations in a degraded mode for the sake of minimal memory overhead. In this mode all tokens resulting from the expansion of an argument of a function-like macro have the same location. Value 2 tracks tokens locations completely. This value is the most memory hungry. When this option is given no argument, the default parameter value is 2. Note that "-ftrack-macro-expansion=2" is activated by default. -fmacro-prefix-map=old=new When preprocessing files residing in directory old, expand the "__FILE__" and "__BASE_FILE__" macros as if the files resided in directory new instead. This can be used to change an absolute path to a relative path by using . for new which can result in more reproducible builds that are location independent. This option also affects "__builtin_FILE()" during compilation. See also -ffile-prefix-map. -fexec-charset=charset Set the execution character set, used for string and character constants. The default is UTF-8. charset can be any encoding supported by the system's "iconv" library routine. -fwide-exec-charset=charset Set the wide execution character set, used for wide string and character constants. The default is UTF-32 or UTF-16, whichever corresponds to the width of "wchar_t". As with -fexec-charset, charset can be any encoding supported by the system's "iconv" library routine; however, you will have problems with encodings that do not fit exactly in "wchar_t". -finput-charset=charset Set the input character set, used for translation from the character set of the input file to the source character set used by GCC. If the locale does not specify, or GCC cannot get this information from the locale, the default is UTF-8. This can be overridden by either the locale or this command- line option. Currently the command-line option takes precedence if there's a conflict. charset can be any encoding supported by the system's "iconv" library routine. -fpch-deps When using precompiled headers, this flag causes the dependency-output flags to also list the files from the precompiled header's dependencies. If not specified, only the precompiled header are listed and not the files that were used to create it, because those files are not consulted when a precompiled header is used. -fpch-preprocess This option allows use of a precompiled header together with -E. It inserts a special "#pragma", "#pragma GCC pch_preprocess "filename"" in the output to mark the place where the precompiled header was found, and its filename. When -fpreprocessed is in use, GCC recognizes this "#pragma" and loads the PCH. This option is off by default, because the resulting preprocessed output is only really suitable as input to GCC. It is switched on by -save-temps. You should not write this "#pragma" in your own code, but it is safe to edit the filename if the PCH file is available in a different location. The filename may be absolute or it may be relative to GCC's current directory. 
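The two precompiled-header options above are easiest to see in a tiny two-file setup. In the sketch below, common.h is a hypothetical project header that has been compiled into a precompiled header first; the file names and the exact command sequence are illustrative only.

        /* pch_demo.c -- includes a hypothetical project header common.h
           that was precompiled beforehand.  An illustrative sequence:
             gcc -c common.h                    # writes common.h.gch
             gcc -E -fpch-preprocess pch_demo.c > pch_demo.i
             gcc -fpreprocessed -c pch_demo.i   # reloads the PCH via the
                                                # #pragma described above
         */
        #include "common.h"

        int main (void)
        {
          return 0;
        }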
-fworking-directory Enable generation of linemarkers in the preprocessor output that let the compiler know the current working directory at the time of preprocessing. When this option is enabled, the preprocessor emits, after the initial linemarker, a second linemarker with the current working directory followed by two slashes. GCC uses this directory, when it's present in the preprocessed input, as the directory emitted as the current working directory in some debugging information formats. This option is implicitly enabled if debugging information is enabled, but this can be inhibited with the negated form -fno-working-directory. If the -P flag is present in the command line, this option has no effect, since no "#line" directives are emitted whatsoever. -A predicate=answer Make an assertion with the predicate predicate and answer answer. This form is preferred to the older form -A predicate(answer), which is still supported, because it does not use shell special characters. -A -predicate=answer Cancel an assertion with the predicate predicate and answer answer. -C Do not discard comments. All comments are passed through to the output file, except for comments in processed directives, which are deleted along with the directive. You should be prepared for side effects when using -C; it causes the preprocessor to treat comments as tokens in their own right. For example, comments appearing at the start of what would be a directive line have the effect of turning that line into an ordinary source line, since the first token on the line is no longer a #. -CC Do not discard comments, including during macro expansion. This is like -C, except that comments contained within macros are also passed through to the output file where the macro is expanded. In addition to the side effects of the -C option, the -CC option causes all C++-style comments inside a macro to be converted to C-style comments. This is to prevent later use of that macro from inadvertently commenting out the remainder of the source line. The -CC option is generally used to support lint comments. -P Inhibit generation of linemarkers in the output from the preprocessor. This might be useful when running the preprocessor on something that is not C code, and will be sent to a program which might be confused by the linemarkers. -traditional -traditional-cpp Try to imitate the behavior of pre-standard C preprocessors, as opposed to ISO C preprocessors. See the GNU CPP manual for details. Note that GCC does not otherwise attempt to emulate a pre- standard C compiler, and these options are only supported with the -E switch, or when invoking CPP explicitly. -trigraphs Support ISO C trigraphs. These are three-character sequences, all starting with ??, that are defined by ISO C to stand for single characters. For example, ??/ stands for \, so '??/n' is a character constant for a newline. The nine trigraphs and their replacements are Trigraph: ??( ??) ??< ??> ??= ??/ ??' ??! ??- Replacement: [ ] { } # \ ^ | ~ By default, GCC ignores trigraphs, but in standard-conforming modes it converts them. See the -std and -ansi options. -remap Enable special code to work around file systems which only permit very short file names, such as MS-DOS. -H Print the name of each header file used, in addition to other normal activities. Each name is indented to show how deep in the #include stack it is. 
Precompiled header files are also printed, even if they are found to be invalid; an invalid precompiled header file is printed with ...x and a valid one with ...! . -dletters Says to make debugging dumps during compilation as specified by letters. The flags documented here are those relevant to the preprocessor. Other letters are interpreted by the compiler proper, or reserved for future versions of GCC, and so are silently ignored. If you specify letters whose behavior conflicts, the result is undefined. -dM Instead of the normal output, generate a list of #define directives for all the macros defined during the execution of the preprocessor, including predefined macros. This gives you a way of finding out what is predefined in your version of the preprocessor. Assuming you have no file foo.h, the command touch foo.h; cpp -dM foo.h shows all the predefined macros. If you use -dM without the -E option, -dM is interpreted as a synonym for -fdump-rtl-mach. -dD Like -dM except in two respects: it does not include the predefined macros, and it outputs both the #define directives and the result of preprocessing. Both kinds of output go to the standard output file. -dN Like -dD, but emit only the macro names, not their expansions. -dI Output #include directives in addition to the result of preprocessing. -dU Like -dD except that only macros that are expanded, or whose definedness is tested in preprocessor directives, are output; the output is delayed until the use or test of the macro; and #undef directives are also output for macros tested but undefined at the time. -fdebug-cpp This option is only useful for debugging GCC. When used from CPP or with -E, it dumps debugging information about location maps. Every token in the output is preceded by the dump of the map its location belongs to. When used from GCC without -E, this option has no effect. -Wp,option You can use -Wp,option to bypass the compiler driver and pass option directly through to the preprocessor. If option contains commas, it is split into multiple options at the commas. However, many options are modified, translated or interpreted by the compiler driver before being passed to the preprocessor, and -Wp forcibly bypasses this phase. The preprocessor's direct interface is undocumented and subject to change, so whenever possible you should avoid using -Wp and let the driver handle the options instead. -Xpreprocessor option Pass option as an option to the preprocessor. You can use this to supply system-specific preprocessor options that GCC does not recognize. If you want to pass an option that takes an argument, you must use -Xpreprocessor twice, once for the option and once for the argument. -no-integrated-cpp Perform preprocessing as a separate pass before compilation. By default, GCC performs preprocessing as an integrated part of input tokenization and parsing. If this option is provided, the appropriate language front end (cc1, cc1plus, or cc1obj for C, C++, and Objective-C, respectively) is instead invoked twice, once for preprocessing only and once for actual compilation of the preprocessed input. This option may be useful in conjunction with the -B or -wrapper options to specify an alternate preprocessor or perform additional processing of the program source between normal preprocessing and compilation. Passing Options to the Assembler You can pass options to the assembler. -Wa,option Pass option as an option to the assembler. If option contains commas, it is split into multiple options at the commas. 
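As a concrete use of -Wa, the GNU assembler's listing facility can be reached through it. This is only a sketch: the listing flags shown are specific to GNU as, so check the assembler's own documentation for the exact spelling it accepts.

        /* listing_demo.c -- any small translation unit will do.  With the
           GNU assembler, an interleaved source/assembly listing can be
           requested through the driver, for example:
             gcc -c -g -Wa,-adhln=listing_demo.lst listing_demo.c
           which still produces listing_demo.o while gas writes the listing
           to listing_demo.lst (flag spelling is gas-specific).          */
        int add (int a, int b)
        {
          return a + b;
        }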
-Xassembler option Pass option as an option to the assembler. You can use this to supply system-specific assembler options that GCC does not recognize. If you want to pass an option that takes an argument, you must use -Xassembler twice, once for the option and once for the argument.
Options for Linking These options come into play when the compiler links object files into an executable output file. They are meaningless if the compiler is not doing a link step.
object-file-name A file name that does not end in a special recognized suffix is considered to name an object file or library. (Object files are distinguished from libraries by the linker according to the file contents.) If linking is done, these object files are used as input to the linker.
-c -S -E If any of these options is used, then the linker is not run, and object file names should not be used as arguments.
-flinker-output=type This option controls the code generation of the link-time optimizer. By default the linker output is determined automatically by the linker plugin. For debugging the compiler, or when incremental linking with a non-LTO object file is desired, it may be useful to control the type manually. If type is exec, code generation is configured to produce a static binary. In this case -fpic and -fpie are both disabled. If type is dyn, code generation is configured to produce a shared library. In this case -fpic or -fPIC is preserved, but not enabled automatically. This makes it possible to build shared libraries without position-independent code on architectures where this is possible, e.g. on x86. If type is pie, code generation is configured to produce a -fpie executable. This results in optimizations similar to exec, except that -fpie is not disabled if it was specified at compilation time. If type is rel, the compiler assumes that incremental linking is done. The sections containing intermediate code for link-time optimization are merged, pre-optimized, and output to the resulting object file. In addition, if -ffat-lto-objects is specified, binary code is also produced for future non-LTO linking. The object file produced by incremental linking is smaller than a static library produced from the same object files, and at link time the result of incremental linking also loads into the compiler faster than a static library, assuming that the majority of objects in the library are used. Finally, nolto-rel configures the compiler for incremental linking where code generation is forced: a final binary is produced and the intermediate code for later link-time optimization is stripped. When multiple object files are linked together, the resulting code is optimized better than with link-time optimization disabled (for example, cross-module inlining happens), but most of the benefits of whole-program optimization are lost. During an incremental link (with -r) the linker plugin defaults to rel. With current interfaces to GNU Binutils it is, however, not possible to incrementally link LTO objects and non-LTO objects into a single mixed object file. If any of the object files in an incremental link cannot be used for link-time optimization, the linker plugin issues a warning and uses nolto-rel. To maintain whole-program optimization, it is recommended to link such objects into a static library instead. Alternatively it is possible to use H.J. Lu's binutils with support for mixed objects.
-fuse-ld=bfd Use the bfd linker instead of the default linker.
-fuse-ld=gold Use the gold linker instead of the default linker.
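A hedged example of switching linkers with the -fuse-ld family (the object file names are placeholders, the chosen linker must actually be installed, and -Wl,--version merely asks the selected linker to identify itself before linking):

        gcc -o prog main.o util.o -fuse-ld=gold -Wl,--version

The same form works for -fuse-ld=bfd above and -fuse-ld=lld below.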
-fuse-ld=lld Use the LLVM lld linker instead of the default linker.
-llibrary -l library Search the library named library when linking. (The second alternative with the library as a separate argument is only for POSIX compliance and is not recommended.) The -l option is passed directly to the linker by GCC. Refer to your linker documentation for exact details. The general description below applies to the GNU linker. The linker searches a standard list of directories for the library. The directories searched include several standard system directories plus any that you specify with -L. Static libraries are archives of object files, and have file names like liblibrary.a. Some targets also support shared libraries, which typically have names like liblibrary.so. If both static and shared libraries are found, the linker gives preference to linking with the shared library unless the -static option is used. It makes a difference where in the command you write this option; the linker searches and processes libraries and object files in the order they are specified. Thus, foo.o -lz bar.o searches library z after file foo.o but before bar.o. If bar.o refers to functions in z, those functions may not be loaded.
-lobjc You need this special case of the -l option in order to link an Objective-C or Objective-C++ program.
-nostartfiles Do not use the standard system startup files when linking. The standard system libraries are used normally, unless -nostdlib, -nolibc, or -nodefaultlibs is used.
-nodefaultlibs Do not use the standard system libraries when linking. Only the libraries you specify are passed to the linker, and options specifying linkage of the system libraries, such as -static-libgcc or -shared-libgcc, are ignored. The standard startup files are used normally, unless -nostartfiles is used. The compiler may generate calls to "memcmp", "memset", "memcpy" and "memmove". These entries are usually resolved by entries in libc. These entry points should be supplied through some other mechanism when this option is specified.
-nolibc Do not use the C library or system libraries tightly coupled with it when linking. Still link with the startup files, libgcc or toolchain provided language support libraries such as libgnat, libgfortran or libstdc++ unless options preventing their inclusion are used as well. This typically removes -lc from the link command line, as well as system libraries that normally go with it and become meaningless when absence of a C library is assumed, for example -lpthread or -lm in some configurations. This is intended for bare-board targets when there is indeed no C library available.
-nostdlib Do not use the standard system startup files or libraries when linking. No startup files and only the libraries you specify are passed to the linker, and options specifying linkage of the system libraries, such as -static-libgcc or -shared-libgcc, are ignored. The compiler may generate calls to "memcmp", "memset", "memcpy" and "memmove". These entries are usually resolved by entries in libc. These entry points should be supplied through some other mechanism when this option is specified. One of the standard libraries bypassed by -nostdlib and -nodefaultlibs is libgcc.a, a library of internal subroutines which GCC uses to overcome shortcomings of particular machines, or special needs for some languages. In most cases, you need libgcc.a even when you want to avoid other standard libraries.
In other words, when you specify -nostdlib or -nodefaultlibs you should usually specify -lgcc as well. This ensures that you have no unresolved references to internal GCC library subroutines. (An example of such an internal subroutine is "__main", used to ensure C++ constructors are called.) -e entry --entry=entry Specify that the program entry point is entry. The argument is interpreted by the linker; the GNU linker accepts either a symbol name or an address. -pie Produce a dynamically linked position independent executable on targets that support it. For predictable results, you must also specify the same set of options used for compilation (-fpie, -fPIE, or model suboptions) when you specify this linker option. -no-pie Don't produce a dynamically linked position independent executable. -static-pie Produce a static position independent executable on targets that support it. A static position independent executable is similar to a static executable, but can be loaded at any address without a dynamic linker. For predictable results, you must also specify the same set of options used for compilation (-fpie, -fPIE, or model suboptions) when you specify this linker option. -pthread Link with the POSIX threads library. This option is supported on GNU/Linux targets, most other Unix derivatives, and also on x86 Cygwin and MinGW targets. On some targets this option also sets flags for the preprocessor, so it should be used consistently for both compilation and linking. -r Produce a relocatable object as output. This is also known as partial linking. -rdynamic Pass the flag -export-dynamic to the ELF linker, on targets that support it. This instructs the linker to add all symbols, not only used ones, to the dynamic symbol table. This option is needed for some uses of "dlopen" or to allow obtaining backtraces from within a program. -s Remove all symbol table and relocation information from the executable. -static On systems that support dynamic linking, this overrides -pie and prevents linking with the shared libraries. On other systems, this option has no effect. -shared Produce a shared object which can then be linked with other objects to form an executable. Not all systems support this option. For predictable results, you must also specify the same set of options used for compilation (-fpic, -fPIC, or model suboptions) when you specify this linker option.[1] -shared-libgcc -static-libgcc On systems that provide libgcc as a shared library, these options force the use of either the shared or static version, respectively. If no shared version of libgcc was built when the compiler was configured, these options have no effect. There are several situations in which an application should use the shared libgcc instead of the static version. The most common of these is when the application wishes to throw and catch exceptions across different shared libraries. In that case, each of the libraries as well as the application itself should use the shared libgcc. Therefore, the G++ driver automatically adds -shared-libgcc whenever you build a shared library or a main executable, because C++ programs typically use exceptions, so this is the right thing to do. If, instead, you use the GCC driver to create shared libraries, you may find that they are not always linked with the shared libgcc. If GCC finds, at its configuration time, that you have a non-GNU linker or a GNU linker that does not support option --eh-frame-hdr, it links the shared version of libgcc into shared libraries by default. 
Otherwise, it takes advantage of the linker and optimizes away the linking with the shared version of libgcc, linking with the static version of libgcc by default. This allows exceptions to propagate through such shared libraries, without incurring relocation costs at library load time. However, if a library or main executable is supposed to throw or catch exceptions, you must link it using the G++ driver, or using the option -shared-libgcc, such that it is linked with the shared libgcc. -static-libasan When the -fsanitize=address option is used to link a program, the GCC driver automatically links against libasan. If libasan is available as a shared library, and the -static option is not used, then this links against the shared version of libasan. The -static-libasan option directs the GCC driver to link libasan statically, without necessarily linking other libraries statically. -static-libtsan When the -fsanitize=thread option is used to link a program, the GCC driver automatically links against libtsan. If libtsan is available as a shared library, and the -static option is not used, then this links against the shared version of libtsan. The -static-libtsan option directs the GCC driver to link libtsan statically, without necessarily linking other libraries statically. -static-liblsan When the -fsanitize=leak option is used to link a program, the GCC driver automatically links against liblsan. If liblsan is available as a shared library, and the -static option is not used, then this links against the shared version of liblsan. The -static-liblsan option directs the GCC driver to link liblsan statically, without necessarily linking other libraries statically. -static-libubsan When the -fsanitize=undefined option is used to link a program, the GCC driver automatically links against libubsan. If libubsan is available as a shared library, and the -static option is not used, then this links against the shared version of libubsan. The -static-libubsan option directs the GCC driver to link libubsan statically, without necessarily linking other libraries statically. -static-libstdc++ When the g++ program is used to link a C++ program, it normally automatically links against libstdc++. If libstdc++ is available as a shared library, and the -static option is not used, then this links against the shared version of libstdc++. That is normally fine. However, it is sometimes useful to freeze the version of libstdc++ used by the program without going all the way to a fully static link. The -static-libstdc++ option directs the g++ driver to link libstdc++ statically, without necessarily linking other libraries statically. -symbolic Bind references to global symbols when building a shared object. Warn about any unresolved references (unless overridden by the link editor option -Xlinker -z -Xlinker defs). Only a few systems support this option. -T script Use script as the linker script. This option is supported by most systems using the GNU linker. On some targets, such as bare-board targets without an operating system, the -T option may be required when linking to avoid references to undefined symbols. -Xlinker option Pass option as an option to the linker. You can use this to supply system-specific linker options that GCC does not recognize. If you want to pass an option that takes a separate argument, you must use -Xlinker twice, once for the option and once for the argument. For example, to pass -assert definitions, you must write -Xlinker -assert -Xlinker definitions. 
It does not work to write -Xlinker "-assert definitions", because this passes the entire string as a single argument, which is not what the linker expects. When using the GNU linker, it is usually more convenient to pass arguments to linker options using the option=value syntax than as separate arguments. For example, you can specify -Xlinker -Map=output.map rather than -Xlinker -Map -Xlinker output.map. Other linkers may not support this syntax for command-line options. -Wl,option Pass option as an option to the linker. If option contains commas, it is split into multiple options at the commas. You can use this syntax to pass an argument to the option. For example, -Wl,-Map,output.map passes -Map output.map to the linker. When using the GNU linker, you can also get the same effect with -Wl,-Map=output.map. -u symbol Pretend the symbol symbol is undefined, to force linking of library modules to define it. You can use -u multiple times with different symbols to force loading of additional library modules. -z keyword -z is passed directly on to the linker along with the keyword keyword. See the section in the documentation of your linker for permitted values and their meanings. Options for Directory Search These options specify directories to search for header files, for libraries and for parts of the compiler: -I dir -iquote dir -isystem dir -idirafter dir Add the directory dir to the list of directories to be searched for header files during preprocessing. If dir begins with = or $SYSROOT, then the = or $SYSROOT is replaced by the sysroot prefix; see --sysroot and -isysroot. Directories specified with -iquote apply only to the quote form of the directive, "#include "file"". Directories specified with -I, -isystem, or -idirafter apply to lookup for both the "#include "file"" and "#include <file>" directives. You can specify any number or combination of these options on the command line to search for header files in several directories. The lookup order is as follows: 1. For the quote form of the include directive, the directory of the current file is searched first. 2. For the quote form of the include directive, the directories specified by -iquote options are searched in left-to-right order, as they appear on the command line. 3. Directories specified with -I options are scanned in left-to-right order. 4. Directories specified with -isystem options are scanned in left-to-right order. 5. Standard system directories are scanned. 6. Directories specified with -idirafter options are scanned in left-to-right order. You can use -I to override a system header file, substituting your own version, since these directories are searched before the standard system header file directories. However, you should not use this option to add directories that contain vendor-supplied system header files; use -isystem for that. The -isystem and -idirafter options also mark the directory as a system directory, so that it gets the same special treatment that is applied to the standard system directories. If a standard system include directory, or a directory specified with -isystem, is also specified with -I, the -I option is ignored. The directory is still searched but as a system directory at its normal position in the system include chain. This is to ensure that GCC's procedure to fix buggy system headers and the ordering for the "#include_next" directive are not inadvertently changed. If you really need to change the search order for system directories, use the -nostdinc and/or -isystem options. 
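A sketch of how these search options are commonly combined (all directory names here are hypothetical):

        gcc -iquote src/internal -Iinclude -isystem third_party/include -c src/foo.c

With this command a "#include "util.h"" in src/foo.c is looked up first in src/ (the directory of the current file), then in src/internal, then in include, then in third_party/include (which is treated as a system directory), and finally in the standard system directories; a "#include <util.h>" skips the first two steps.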
-I- Split the include path. This option has been deprecated. Please use -iquote instead for -I directories before the -I- and remove the -I- option. Any directories specified with -I options before -I- are searched only for headers requested with "#include "file""; they are not searched for "#include <file>". If additional directories are specified with -I options after the -I-, those directories are searched for all #include directives. In addition, -I- inhibits the use of the current file's directory as the first search directory for "#include "file"". There is no way to override this effect of -I-.
-iprefix prefix Specify prefix as the prefix for subsequent -iwithprefix options. If the prefix represents a directory, you should include the final /.
-iwithprefix dir -iwithprefixbefore dir Append dir to the prefix specified previously with -iprefix, and add the resulting directory to the include search path. -iwithprefixbefore puts it in the same place -I would; -iwithprefix puts it where -idirafter would.
-isysroot dir This option is like the --sysroot option, but applies only to header files (except for Darwin targets, where it applies to both header files and libraries). See the --sysroot option for more information.
-imultilib dir Use dir as a subdirectory of the directory containing target-specific C++ headers.
-nostdinc Do not search the standard system directories for header files. Only the directories explicitly specified with -I, -iquote, -isystem, and/or -idirafter options (and the directory of the current file, if appropriate) are searched.
-nostdinc++ Do not search for header files in the C++-specific standard directories, but do still search the other standard directories. (This option is used when building the C++ library.)
-iplugindir=dir Set the directory to search for plugins that are passed by -fplugin=name instead of -fplugin=path/name.so. This option is not meant to be used by the user, but only passed by the driver.
-Ldir Add directory dir to the list of directories to be searched for -l.
-Bprefix This option specifies where to find the executables, libraries, include files, and data files of the compiler itself. The compiler driver program runs one or more of the subprograms cpp, cc1, as and ld. It tries prefix as a prefix for each program it tries to run, both with and without machine/version/ for the corresponding target machine and compiler version. For each subprogram to be run, the compiler driver first tries the -B prefix, if any. If that name is not found, or if -B is not specified, the driver tries two standard prefixes, /usr/lib/gcc/ and /usr/local/lib/gcc/. If neither of those results in a file name that is found, the unmodified program name is searched for using the directories specified in your PATH environment variable. The compiler checks to see if the path provided by -B refers to a directory, and if necessary it adds a directory separator character at the end of the path. -B prefixes that effectively specify directory names also apply to libraries in the linker, because the compiler translates these options into -L options for the linker. They also apply to include files in the preprocessor, because the compiler translates these options into -isystem options for the preprocessor. In this case, the compiler appends include to the prefix. The runtime support file libgcc.a can also be searched for using the -B prefix, if needed. If it is not found there, the two standard prefixes above are tried, and that is all.
The file is left out of the link if it is not found by those means. Another way to specify a prefix much like the -B prefix is to use the environment variable GCC_EXEC_PREFIX. As a special kludge, if the path provided by -B is [dir/]stageN/, where N is a number in the range 0 to 9, then it is replaced by [dir/]include. This is to help with bootstrapping the compiler.
-no-canonical-prefixes Do not expand any symbolic links, resolve references to /../ or /./, or make the path absolute when generating a relative prefix.
--sysroot=dir Use dir as the logical root directory for headers and libraries. For example, if the compiler normally searches for headers in /usr/include and libraries in /usr/lib, it instead searches dir/usr/include and dir/usr/lib. If you use both this option and the -isysroot option, then the --sysroot option applies to libraries, but the -isysroot option applies to header files. The GNU linker (beginning with version 2.16) has the necessary support for this option. If your linker does not support this option, the header file aspect of --sysroot still works, but the library aspect does not.
--no-sysroot-suffix For some targets, a suffix is added to the root directory specified with --sysroot, depending on the other options used, so that headers may for example be found in dir/suffix/usr/include instead of dir/usr/include. This option disables the addition of such a suffix.
Options for Code Generation Conventions These machine-independent options control the interface conventions used in code generation. Most of them have both positive and negative forms; the negative form of -ffoo is -fno-foo. In the table below, only one of the forms is listed---the one that is not the default. You can figure out the other form by either removing no- or adding it.
-fstack-reuse=reuse_level This option controls stack space reuse for user declared local/auto variables and compiler generated temporaries. reuse_level can be all, named_vars, or none. all enables stack reuse for all local variables and temporaries, named_vars enables the reuse only for user defined local variables with names, and none disables stack reuse completely. The default value is all. The option is needed when the program extends the lifetime of a scoped local variable or a compiler generated temporary beyond the end point defined by the language. When a lifetime of a variable ends, and if the variable lives in memory, the optimizing compiler has the freedom to reuse its stack space with other temporaries or scoped local variables whose live range does not overlap with it. Legacy code extending local lifetime is likely to break with the stack reuse optimization. For example,

        int *p;
        {
          int local1;

          p = &local1;
          local1 = 10;
          ....
        }
        {
          int local2;
          local2 = 20;
          ...
        }

        if (*p == 10)  // out of scope use of local1
          {
          }

Another example:

        struct A
        {
            A(int k) : i(k), j(k)
            {
            }
            int i;
            int j;
        };

        A *ap;

        void foo(const A& ar)
        {
           ap = &ar;
        }

        void bar()
        {
           foo(A(10));  // temp object's lifetime ends when foo returns

           {
             A a(20);
             ....
           }
           ap->i += 10;  // ap references out of scope temp whose space
                         // is reused with a.  What is the value of ap->i?
        }

The lifetime of a compiler generated temporary is well defined by the C++ standard. When a lifetime of a temporary ends, and if the temporary lives in memory, the optimizing compiler has the freedom to reuse its stack space with other temporaries or scoped local variables whose live range does not overlap with it.
However, some legacy code relies on the behavior of older compilers in which temporaries' stack space is not reused; with such code, the aggressive stack reuse can lead to runtime errors. This option is used to control the temporary stack reuse optimization.
-ftrapv This option generates traps for signed overflow on addition, subtraction, multiplication operations. The options -ftrapv and -fwrapv override each other, so using -ftrapv -fwrapv on the command-line results in -fwrapv being effective. Note that only active options override, so using -ftrapv -fwrapv -fno-wrapv on the command-line results in -ftrapv being effective.
-fwrapv This option instructs the compiler to assume that signed arithmetic overflow of addition, subtraction and multiplication wraps around using twos-complement representation. This flag enables some optimizations and disables others. The options -ftrapv and -fwrapv override each other, so using -ftrapv -fwrapv on the command-line results in -fwrapv being effective. Note that only active options override, so using -ftrapv -fwrapv -fno-wrapv on the command-line results in -ftrapv being effective.
-fwrapv-pointer This option instructs the compiler to assume that pointer arithmetic overflow on addition and subtraction wraps around using twos-complement representation. This flag disables some optimizations which assume pointer overflow is invalid.
-fstrict-overflow This option implies -fno-wrapv -fno-wrapv-pointer and when negated implies -fwrapv -fwrapv-pointer.
-fexceptions Enable exception handling. Generates extra code needed to propagate exceptions. For some targets, this implies GCC generates frame unwind information for all functions, which can produce significant data size overhead, although it does not affect execution. If you do not specify this option, GCC enables it by default for languages like C++ that normally require exception handling, and disables it for languages like C that do not normally require it. However, you may need to enable this option when compiling C code that needs to interoperate properly with exception handlers written in C++. You may also wish to disable this option if you are compiling older C++ programs that don't use exception handling.
-fnon-call-exceptions Generate code that allows trapping instructions to throw exceptions. Note that this requires platform-specific runtime support that does not exist everywhere. Moreover, it only allows trapping instructions to throw exceptions, i.e. memory references or floating-point instructions. It does not allow exceptions to be thrown from arbitrary signal handlers such as "SIGALRM".
-fdelete-dead-exceptions Consider that instructions that may throw exceptions but don't otherwise contribute to the execution of the program can be optimized away. This option is enabled by default for the Ada front end, as permitted by the Ada language specification. Optimization passes that cause dead exceptions to be removed are enabled independently at different optimization levels.
-funwind-tables Similar to -fexceptions, except that it just generates any needed static data, but does not affect the generated code in any other way. You normally do not need to enable this option; instead, a language processor that needs this handling enables it on your behalf.
-fasynchronous-unwind-tables Generate unwind table in DWARF format, if supported by target machine. The table is exact at each instruction boundary, so it can be used for stack unwinding from asynchronous events (such as debugger or garbage collector).
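Returning to -ftrapv and -fwrapv above, a minimal sketch of the difference (the file name is hypothetical and the exact abort diagnostic produced by the trap is target-dependent):

        /* overflow.c */
        #include <limits.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
          int x = INT_MAX;
          (void)argv;
          x += argc;               /* signed overflow for any argc >= 1 */
          printf("%d\n", x);
          return 0;
        }

        gcc -O2 -fwrapv overflow.c && ./a.out   # prints the wrapped (two's-complement) value
        gcc -O2 -ftrapv overflow.c && ./a.out   # aborts at run time on the overflow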
-fno-gnu-unique On systems with recent GNU assembler and C library, the C++ compiler uses the "STB_GNU_UNIQUE" binding to make sure that definitions of template static data members and static local variables in inline functions are unique even in the presence of "RTLD_LOCAL"; this is necessary to avoid problems with a library used by two different "RTLD_LOCAL" plugins depending on a definition in one of them and therefore disagreeing with the other one about the binding of the symbol. But this causes "dlclose" to be ignored for affected DSOs; if your program relies on reinitialization of a DSO via "dlclose" and "dlopen", you can use -fno-gnu-unique.
-fpcc-struct-return Return "short" "struct" and "union" values in memory like longer ones, rather than in registers. This convention is less efficient, but it has the advantage of allowing intercallability between GCC-compiled files and files compiled with other compilers, particularly the Portable C Compiler (pcc). The precise convention for returning structures in memory depends on the target configuration macros. Short structures and unions are those whose size and alignment match that of some integer type. Warning: code compiled with the -fpcc-struct-return switch is not binary compatible with code compiled with the -freg-struct-return switch. Use it to conform to a non-default application binary interface.
-freg-struct-return Return "struct" and "union" values in registers when possible. This is more efficient for small structures than -fpcc-struct-return. If you specify neither -fpcc-struct-return nor -freg-struct-return, GCC defaults to whichever convention is standard for the target. If there is no standard convention, GCC defaults to -fpcc-struct-return, except on targets where GCC is the principal compiler. In those cases, we can choose the standard, and we chose the more efficient register return alternative. Warning: code compiled with the -freg-struct-return switch is not binary compatible with code compiled with the -fpcc-struct-return switch. Use it to conform to a non-default application binary interface.
-fshort-enums Allocate to an "enum" type only as many bytes as it needs for the declared range of possible values. Specifically, the "enum" type is equivalent to the smallest integer type that has enough room. Warning: the -fshort-enums switch causes GCC to generate code that is not binary compatible with code generated without that switch. Use it to conform to a non-default application binary interface.
-fshort-wchar Override the underlying type for "wchar_t" to be "short unsigned int" instead of the default for the target. This option is useful for building programs to run under WINE. Warning: the -fshort-wchar switch causes GCC to generate code that is not binary compatible with code generated without that switch. Use it to conform to a non-default application binary interface.
-fno-common In C code, this option controls the placement of global variables defined without an initializer, known as tentative definitions in the C standard. Tentative definitions are distinct from declarations of a variable with the "extern" keyword, which do not allocate storage. Unix C compilers have traditionally allocated storage for uninitialized global variables in a common block. This allows the linker to resolve all tentative definitions of the same variable in different compilation units to the same object, or to a non-tentative definition. This is the behavior specified by -fcommon, and is the default for GCC on most targets.
On the other hand, this behavior is not required by ISO C, and on some targets may carry a speed or code size penalty on variable references. The -fno-common option specifies that the compiler should instead place uninitialized global variables in the BSS section of the object file. This inhibits the merging of tentative definitions by the linker so you get a multiple-definition error if the same variable is defined in more than one compilation unit. Compiling with -fno-common is useful on targets for which it provides better performance, or if you wish to verify that the program will work on other systems that always treat uninitialized variable definitions this way.
-fno-ident Ignore the "#ident" directive.
-finhibit-size-directive Don't output a ".size" assembler directive, or anything else that would cause trouble if the function is split in the middle, and the two halves are placed at locations far apart in memory. This option is used when compiling crtstuff.c; you should not need to use it for anything else.
-fverbose-asm Put extra commentary information in the generated assembly code to make it more readable. This option is generally only of use to those who actually need to read the generated assembly code (perhaps while debugging the compiler itself). -fno-verbose-asm, the default, causes the extra information to be omitted and is useful when comparing two assembler files.

The added comments include:

* information on the compiler version and command-line options,

* the source code lines associated with the assembly instructions, in the form FILENAME:LINENUMBER:CONTENT OF LINE,

* hints on which high-level expressions correspond to the various assembly instruction operands.

For example, given this C source file:

        int test (int n)
        {
          int i;
          int total = 0;

          for (i = 0; i < n; i++)
            total += i * i;

          return total;
        }

compiling to (x86_64) assembly via -S and emitting the result directly to stdout via -o -

        gcc -S test.c -fverbose-asm -Os -o -

gives output similar to this:

                .file   "test.c"
        # GNU C11 (GCC) version 7.0.0 20160809 (experimental) (x86_64-pc-linux-gnu)
          [...snip...]
        # options passed:
          [...snip...]
                .text
                .globl  test
                .type   test, @function
        test:
        .LFB0:
                .cfi_startproc
        # test.c:4:   int total = 0;
                xorl    %eax, %eax      # <retval>
        # test.c:6:   for (i = 0; i < n; i++)
                xorl    %edx, %edx      # i
        .L2:
        # test.c:6:   for (i = 0; i < n; i++)
                cmpl    %edi, %edx      # n, i
                jge     .L5     #,
        # test.c:7:     total += i * i;
                movl    %edx, %ecx      # i, tmp92
                imull   %edx, %ecx      # i, tmp92
        # test.c:6:   for (i = 0; i < n; i++)
                incl    %edx    # i
        # test.c:7:     total += i * i;
                addl    %ecx, %eax      # tmp92, <retval>
                jmp     .L2     #
        .L5:
        # test.c:10: }
                ret
                .cfi_endproc
        .LFE0:
                .size   test, .-test
                .ident  "GCC: (GNU) 7.0.0 20160809 (experimental)"
                .section        .note.GNU-stack,"",@progbits

The comments are intended for humans rather than machines and hence the precise format of the comments is subject to change.
-frecord-gcc-switches This switch causes the command line used to invoke the compiler to be recorded into the object file that is being created. This switch is only implemented on some targets and the exact format of the recording is target and binary file format dependent, but it usually takes the form of a section containing ASCII text. This switch is related to the -fverbose-asm switch, but that switch only records information in the assembler output file as comments, so it never reaches the object file. See also -grecord-gcc-switches for another way of storing compiler options into the object file.
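For instance (a sketch for ELF targets, where the recording usually ends up in a section named .GCC.command.line; the section name and the inspection tool may differ on other object formats):

        gcc -O2 -frecord-gcc-switches -c foo.c
        readelf -p .GCC.command.line foo.o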
-fpic Generate position-independent code (PIC) suitable for use in a shared library, if supported for the target machine. Such code accesses all constant addresses through a global offset table (GOT). The dynamic loader resolves the GOT entries when the program starts (the dynamic loader is not part of GCC; it is part of the operating system). If the GOT size for the linked executable exceeds a machine-specific maximum size, you get an error message from the linker indicating that -fpic does not work; in that case, recompile with -fPIC instead. (These maximums are 8k on the SPARC, 28k on AArch64 and 32k on the m68k and RS/6000. The x86 has no such limit.) Position-independent code requires special support, and therefore works only on certain machines. For the x86, GCC supports PIC for System V but not for the Sun 386i. Code generated for the IBM RS/6000 is always position-independent. When this flag is set, the macros "__pic__" and "__PIC__" are defined to 1.
-fPIC If supported for the target machine, emit position-independent code, suitable for dynamic linking and avoiding any limit on the size of the global offset table. This option makes a difference on AArch64, m68k, PowerPC and SPARC. Position-independent code requires special support, and therefore works only on certain machines. When this flag is set, the macros "__pic__" and "__PIC__" are defined to 2.
-fpie -fPIE These options are similar to -fpic and -fPIC, but the generated position-independent code can only be linked into executables. Usually these options are used to compile code that will be linked using the -pie GCC option. -fpie and -fPIE both define the macros "__pie__" and "__PIE__". The macros have the value 1 for -fpie and 2 for -fPIE.
-fno-plt Do not use the PLT for external function calls in position-independent code. Instead, load the callee address at call sites from the GOT and branch to it. This leads to more efficient code by eliminating PLT stubs and exposing GOT loads to optimizations. On architectures such as 32-bit x86 where PLT stubs expect the GOT pointer in a specific register, this gives more register allocation freedom to the compiler. Lazy binding requires use of the PLT; with -fno-plt all external symbols are resolved at load time. Alternatively, the function attribute "noplt" can be used to avoid calls through the PLT for specific external functions. In position-dependent code, a few targets also convert calls to functions that are marked to not use the PLT to use the GOT instead.
-fno-jump-tables Do not use jump tables for switch statements even where it would be more efficient than other code generation strategies. This option is of use in conjunction with -fpic or -fPIC for building code that forms part of a dynamic linker and cannot reference the address of a jump table. On some targets, jump tables do not require a GOT and this option is not needed.
-ffixed-reg Treat the register named reg as a fixed register; generated code should never refer to it (except perhaps as a stack pointer, frame pointer or in some other fixed role). reg must be the name of a register. The register names accepted are machine-specific and are defined in the "REGISTER_NAMES" macro in the machine description macro file. This flag does not have a negative form, because it specifies a three-way choice.
-fcall-used-reg Treat the register named reg as an allocable register that is clobbered by function calls. It may be allocated for temporaries or variables that do not live across a call.
Functions compiled this way do not save and restore the register reg. It is an error to use this flag with the frame pointer or stack pointer. Use of this flag for other registers that have fixed pervasive roles in the machine's execution model produces disastrous results. This flag does not have a negative form, because it specifies a three-way choice.
-fcall-saved-reg Treat the register named reg as an allocable register saved by functions. It may be allocated even for temporaries or variables that live across a call. Functions compiled this way save and restore the register reg if they use it. It is an error to use this flag with the frame pointer or stack pointer. Use of this flag for other registers that have fixed pervasive roles in the machine's execution model produces disastrous results. A different sort of disaster results from the use of this flag for a register in which function values may be returned. This flag does not have a negative form, because it specifies a three-way choice.
-fpack-struct[=n] Without a value specified, pack all structure members together without holes. When a value is specified (which must be a small power of two), pack structure members according to this value, representing the maximum alignment (that is, objects with default alignment requirements larger than this are output potentially unaligned at the next fitting location). Warning: the -fpack-struct switch causes GCC to generate code that is not binary compatible with code generated without that switch. Additionally, it makes the code suboptimal. Use it to conform to a non-default application binary interface.
-fleading-underscore This option and its counterpart, -fno-leading-underscore, forcibly change the way C symbols are represented in the object file. One use is to help link with legacy assembly code. Warning: the -fleading-underscore switch causes GCC to generate code that is not binary compatible with code generated without that switch. Use it to conform to a non-default application binary interface. Not all targets provide complete support for this switch.
-ftls-model=model Alter the thread-local storage model to be used. The model argument should be one of global-dynamic, local-dynamic, initial-exec or local-exec. Note that the choice is subject to optimization: the compiler may use a more efficient model for symbols not visible outside of the translation unit, or if -fpic is not given on the command line. The default without -fpic is initial-exec; with -fpic the default is global-dynamic.
-ftrampolines For targets that normally need trampolines for nested functions, always generate them instead of using descriptors. Otherwise, for targets that do not need them, such as HP-PA or IA-64, do nothing. A trampoline is a small piece of code that is created at run time on the stack when the address of a nested function is taken, and is used to call the nested function indirectly. Therefore, it requires the stack to be made executable in order for the program to work properly. -fno-trampolines is enabled by default on a language by language basis to let the compiler avoid generating them, if it computes that this is safe, and replace them with descriptors. Descriptors are made up of data only, but the generated code must be prepared to deal with them. As of this writing, -fno-trampolines is enabled by default only for Ada. Moreover, code compiled with -ftrampolines and code compiled with -fno-trampolines are not binary compatible if nested functions are present.
This option must therefore be used on a program-wide basis and be manipulated with extreme care.
-fvisibility=[default|internal|hidden|protected] Set the default ELF image symbol visibility to the specified option---all symbols are marked with this unless overridden within the code. Using this feature can very substantially improve linking and load times of shared object libraries, produce more optimized code, provide near-perfect API export and prevent symbol clashes. It is strongly recommended that you use this in any shared objects you distribute. Despite the nomenclature, default always means public; i.e., available to be linked against from outside the shared object. protected and internal are pretty useless in real-world usage so the only other commonly used option is hidden. The default if -fvisibility isn't specified is default, i.e., make every symbol public. A good explanation of the benefits offered by ensuring ELF symbols have the correct visibility is given by "How To Write Shared Libraries" by Ulrich Drepper (which can be found at <https://www.akkadia.org/drepper/>)---however, a superior solution made possible by this option is to invert the approach: instead of marking things hidden when the default is public, make the default hidden and mark things public. This is the norm with DLLs on Windows and with -fvisibility=hidden and "__attribute__ ((visibility("default")))" instead of "__declspec(dllexport)" you get almost identical semantics with identical syntax. This is a great boon to those working with cross-platform projects. For those adding visibility support to existing code, you may find "#pragma GCC visibility" of use. This works by you enclosing the declarations you wish to set visibility for with (for example) "#pragma GCC visibility push(hidden)" and "#pragma GCC visibility pop". Bear in mind that symbol visibility should be viewed as part of the API interface contract and thus all new code should always specify visibility when it is not the default; i.e., declarations only for use within the local DSO should always be marked explicitly as hidden so as to avoid PLT indirection overheads---making this abundantly clear also aids readability and self-documentation of the code. Note that due to ISO C++ specification requirements, "operator new" and "operator delete" must always be of default visibility. Be aware that headers from outside your project, in particular system headers and headers from any other library you use, may not be expecting to be compiled with visibility other than the default. You may need to explicitly say "#pragma GCC visibility push(default)" before including any such headers. "extern" declarations are not affected by -fvisibility, so a lot of code can be recompiled with -fvisibility=hidden with no modifications. However, this means that calls to "extern" functions with no explicit visibility use the PLT, so it is more effective to use "__attribute ((visibility))" and/or "#pragma GCC visibility" to tell the compiler which "extern" declarations should be treated as hidden. Note that -fvisibility does affect C++ vague linkage entities. This means that, for instance, an exception class that is to be thrown between DSOs must be explicitly marked with default visibility so that the type_info nodes are unified between the DSOs. An overview of these techniques, their benefits and how to use them is at <http://gcc.gnu.org/wiki/Visibility>.
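A small sketch of the "hidden by default, export explicitly" approach recommended above (the names are invented; on ELF systems the exported set can be checked with nm -D libfoo.so):

        /* foo.c: build with  gcc -fPIC -fvisibility=hidden -shared -o libfoo.so foo.c */

        __attribute__ ((visibility ("default")))
        int foo_api (void)           /* exported from the DSO */
        {
          return 42;
        }

        int foo_helper (void)        /* hidden: usable only inside the DSO */
        {
          return 7;
        }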
-fstrict-volatile-bitfields This option should be used if accesses to volatile bit-fields (or other structure fields, although the compiler usually honors those types anyway) should use a single access of the width of the field's type, aligned to a natural alignment if possible. For example, targets with memory-mapped peripheral registers might require all such accesses to be 16 bits wide; with this flag you can declare all peripheral bit-fields as "unsigned short" (assuming short is 16 bits on these targets) to force GCC to use 16-bit accesses instead of, perhaps, a more efficient 32-bit access. If this option is disabled, the compiler uses the most efficient instruction. In the previous example, that might be a 32-bit load instruction, even though that accesses bytes that do not contain any portion of the bit-field, or memory-mapped registers unrelated to the one being updated. In some cases, such as when the "packed" attribute is applied to a structure field, it may not be possible to access the field with a single read or write that is correctly aligned for the target machine. In this case GCC falls back to generating multiple accesses rather than code that will fault or truncate the result at run time. Note: Due to restrictions of the C/C++11 memory model, write accesses are not allowed to touch non bit-field members. It is therefore recommended to define all bits of the field's type as bit-field members. The default value of this option is determined by the application binary interface for the target processor.
-fsync-libcalls This option controls whether any out-of-line instance of the "__sync" family of functions may be used to implement the C++11 "__atomic" family of functions. The default value of this option is enabled, thus the only useful form of the option is -fno-sync-libcalls. This option is used in the implementation of the libatomic runtime library.
GCC Developer Options This section describes command-line options that are primarily of interest to GCC developers, including options to support compiler testing and investigation of compiler bugs and compile-time performance problems. This includes options that produce debug dumps at various points in the compilation; that print statistics such as memory use and execution time; and that print information about GCC's configuration, such as where it searches for libraries. You should rarely need to use any of these options for ordinary compilation and linking tasks. Many developer options that cause GCC to dump output to a file take an optional =filename suffix. You can specify stdout or - to dump to standard output, and stderr for standard error. If =filename is omitted, a default dump file name is constructed by concatenating the base dump file name, a pass number, phase letter, and pass name. The base dump file name is the name of the output file produced by the compiler, if explicitly specified and not an executable; otherwise it is the source file name. The pass number is determined by the order passes are registered with the compiler's pass manager. This is generally the same as the order of execution, but passes registered by plugins, target-specific passes, or passes that are otherwise registered late are numbered higher than the pass named final, even if they are executed earlier. The phase letter is one of i (inter-procedural analysis), l (language-specific), r (RTL), or t (tree). The files are created in the directory of the output file.
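As a sketch of the naming scheme just described (the pass number and phase letter vary between GCC versions and passes):

        gcc -O2 -c foo.c -fdump-tree-optimized
        # writes something like foo.c.234t.optimized next to the output file

        gcc -O2 -c foo.c -fdump-tree-optimized=stderr
        # sends the same dump to standard error instead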
-dletters -fdump-rtl-pass -fdump-rtl-pass=filename Says to make debugging dumps during compilation at times specified by letters. This is used for debugging the RTL- based passes of the compiler. Some -dletters switches have different meaning when -E is used for preprocessing. Debug dumps can be enabled with a -fdump-rtl switch or some -d option letters. Here are the possible letters for use in pass and letters, and their meanings: -fdump-rtl-alignments Dump after branch alignments have been computed. -fdump-rtl-asmcons Dump after fixing rtl statements that have unsatisfied in/out constraints. -fdump-rtl-auto_inc_dec Dump after auto-inc-dec discovery. This pass is only run on architectures that have auto inc or auto dec instructions. -fdump-rtl-barriers Dump after cleaning up the barrier instructions. -fdump-rtl-bbpart Dump after partitioning hot and cold basic blocks. -fdump-rtl-bbro Dump after block reordering. -fdump-rtl-btl1 -fdump-rtl-btl2 -fdump-rtl-btl1 and -fdump-rtl-btl2 enable dumping after the two branch target load optimization passes. -fdump-rtl-bypass Dump after jump bypassing and control flow optimizations. -fdump-rtl-combine Dump after the RTL instruction combination pass. -fdump-rtl-compgotos Dump after duplicating the computed gotos. -fdump-rtl-ce1 -fdump-rtl-ce2 -fdump-rtl-ce3 -fdump-rtl-ce1, -fdump-rtl-ce2, and -fdump-rtl-ce3 enable dumping after the three if conversion passes. -fdump-rtl-cprop_hardreg Dump after hard register copy propagation. -fdump-rtl-csa Dump after combining stack adjustments. -fdump-rtl-cse1 -fdump-rtl-cse2 -fdump-rtl-cse1 and -fdump-rtl-cse2 enable dumping after the two common subexpression elimination passes. -fdump-rtl-dce Dump after the standalone dead code elimination passes. -fdump-rtl-dbr Dump after delayed branch scheduling. -fdump-rtl-dce1 -fdump-rtl-dce2 -fdump-rtl-dce1 and -fdump-rtl-dce2 enable dumping after the two dead store elimination passes. -fdump-rtl-eh Dump after finalization of EH handling code. -fdump-rtl-eh_ranges Dump after conversion of EH handling range regions. -fdump-rtl-expand Dump after RTL generation. -fdump-rtl-fwprop1 -fdump-rtl-fwprop2 -fdump-rtl-fwprop1 and -fdump-rtl-fwprop2 enable dumping after the two forward propagation passes. -fdump-rtl-gcse1 -fdump-rtl-gcse2 -fdump-rtl-gcse1 and -fdump-rtl-gcse2 enable dumping after global common subexpression elimination. -fdump-rtl-init-regs Dump after the initialization of the registers. -fdump-rtl-initvals Dump after the computation of the initial value sets. -fdump-rtl-into_cfglayout Dump after converting to cfglayout mode. -fdump-rtl-ira Dump after iterated register allocation. -fdump-rtl-jump Dump after the second jump optimization. -fdump-rtl-loop2 -fdump-rtl-loop2 enables dumping after the rtl loop optimization passes. -fdump-rtl-mach Dump after performing the machine dependent reorganization pass, if that pass exists. -fdump-rtl-mode_sw Dump after removing redundant mode switches. -fdump-rtl-rnreg Dump after register renumbering. -fdump-rtl-outof_cfglayout Dump after converting from cfglayout mode. -fdump-rtl-peephole2 Dump after the peephole pass. -fdump-rtl-postreload Dump after post-reload optimizations. -fdump-rtl-pro_and_epilogue Dump after generating the function prologues and epilogues. -fdump-rtl-sched1 -fdump-rtl-sched2 -fdump-rtl-sched1 and -fdump-rtl-sched2 enable dumping after the basic block scheduling passes. -fdump-rtl-ree Dump after sign/zero extension elimination. -fdump-rtl-seqabstr Dump after common sequence discovery. 
-fdump-rtl-shorten Dump after shortening branches. -fdump-rtl-sibling Dump after sibling call optimizations. -fdump-rtl-split1 -fdump-rtl-split2 -fdump-rtl-split3 -fdump-rtl-split4 -fdump-rtl-split5 These options enable dumping after five rounds of instruction splitting. -fdump-rtl-sms Dump after modulo scheduling. This pass is only run on some architectures. -fdump-rtl-stack Dump after conversion from GCC's "flat register file" registers to the x87's stack-like registers. This pass is only run on x86 variants. -fdump-rtl-subreg1 -fdump-rtl-subreg2 -fdump-rtl-subreg1 and -fdump-rtl-subreg2 enable dumping after the two subreg expansion passes. -fdump-rtl-unshare Dump after all rtl has been unshared. -fdump-rtl-vartrack Dump after variable tracking. -fdump-rtl-vregs Dump after converting virtual registers to hard registers. -fdump-rtl-web Dump after live range splitting. -fdump-rtl-regclass -fdump-rtl-subregs_of_mode_init -fdump-rtl-subregs_of_mode_finish -fdump-rtl-dfinit -fdump-rtl-dfinish These dumps are defined but always produce empty files. -da -fdump-rtl-all Produce all the dumps listed above. -dA Annotate the assembler output with miscellaneous debugging information. -dD Dump all macro definitions, at the end of preprocessing, in addition to normal output. -dH Produce a core dump whenever an error occurs. -dp Annotate the assembler output with a comment indicating which pattern and alternative is used. The length and cost of each instruction are also printed. -dP Dump the RTL in the assembler output as a comment before each instruction. Also turns on -dp annotation. -dx Just generate RTL for a function instead of compiling it. Usually used with -fdump-rtl-expand. -fdump-debug Dump debugging information generated during the debug generation phase. -fdump-earlydebug Dump debugging information generated during the early debug generation phase. -fdump-noaddr When doing debugging dumps, suppress address output. This makes it more feasible to use diff on debugging dumps for compiler invocations with different compiler binaries and/or different text / bss / data / heap / stack / dso start locations. -freport-bug Collect and dump debug information into a temporary file if an internal compiler error (ICE) occurs. -fdump-unnumbered When doing debugging dumps, suppress instruction numbers and address output. This makes it more feasible to use diff on debugging dumps for compiler invocations with different options, in particular with and without -g. -fdump-unnumbered-links When doing debugging dumps (see -d option above), suppress instruction numbers for the links to the previous and next instructions in a sequence. -fdump-ipa-switch -fdump-ipa-switch-options Control the dumping at various stages of inter-procedural analysis language tree to a file. The file name is generated by appending a switch specific suffix to the source file name, and the file is created in the same directory as the output file. The following dumps are possible: all Enables all inter-procedural analysis dumps. cgraph Dumps information about call-graph optimization, unused function removal, and inlining decisions. inline Dump after function inlining. Additionally, the options -optimized, -missed, -note, and -all can be provided, with the same meaning as for -fopt-info, defaulting to -optimized. For example, -fdump-ipa-inline-optimized-missed will emit information on callsites that were inlined, along with callsites that were not inlined. 
By default, the dump will contain messages about successful optimizations (equivalent to -optimized) together with low- level details about the analysis. -fdump-lang-all -fdump-lang-switch -fdump-lang-switch-options -fdump-lang-switch-options=filename Control the dumping of language-specific information. The options and filename portions behave as described in the -fdump-tree option. The following switch values are accepted: all Enable all language-specific dumps. class Dump class hierarchy information. Virtual table information is emitted unless 'slim' is specified. This option is applicable to C++ only. raw Dump the raw internal tree data. This option is applicable to C++ only. -fdump-passes Print on stderr the list of optimization passes that are turned on and off by the current command-line options. -fdump-statistics-option Enable and control dumping of pass statistics in a separate file. The file name is generated by appending a suffix ending in .statistics to the source file name, and the file is created in the same directory as the output file. If the -option form is used, -stats causes counters to be summed over the whole compilation unit while -details dumps every event as the passes generate them. The default with no option is to sum counters for each function compiled. -fdump-tree-all -fdump-tree-switch -fdump-tree-switch-options -fdump-tree-switch-options=filename Control the dumping at various stages of processing the intermediate language tree to a file. If the -options form is used, options is a list of - separated options which control the details of the dump. Not all options are applicable to all dumps; those that are not meaningful are ignored. The following options are available address Print the address of each node. Usually this is not meaningful as it changes according to the environment and source file. Its primary use is for tying up a dump file with a debug environment. asmname If "DECL_ASSEMBLER_NAME" has been set for a given decl, use that in the dump instead of "DECL_NAME". Its primary use is ease of use working backward from mangled names in the assembly file. slim When dumping front-end intermediate representations, inhibit dumping of members of a scope or body of a function merely because that scope has been reached. Only dump such items when they are directly reachable by some other path. When dumping pretty-printed trees, this option inhibits dumping the bodies of control structures. When dumping RTL, print the RTL in slim (condensed) form instead of the default LISP-like representation. raw Print a raw representation of the tree. By default, trees are pretty-printed into a C-like representation. details Enable more detailed dumps (not honored by every dump option). Also include information from the optimization passes. stats Enable dumping various statistics about the pass (not honored by every dump option). blocks Enable showing basic block boundaries (disabled in raw dumps). graph For each of the other indicated dump files (-fdump-rtl-pass), dump a representation of the control flow graph suitable for viewing with GraphViz to file.passid.pass.dot. Each function in the file is pretty-printed as a subgraph, so that GraphViz can render them all in a single plot. This option currently only works for RTL dumps, and the RTL is always dumped in slim form. vops Enable showing virtual operands for every statement. lineno Enable showing line numbers for statements. uid Enable showing the unique ID ("DECL_UID") for each variable. 
verbose Enable showing the tree dump for each statement. eh Enable showing the EH region number holding each statement. scev Enable showing scalar evolution analysis details. optimized Enable showing optimization information (only available in certain passes). missed Enable showing missed optimization information (only available in certain passes). note Enable other detailed optimization information (only available in certain passes). all Turn on all options, except raw, slim, verbose and lineno. optall Turn on all optimization options, i.e., optimized, missed, and note. To determine what tree dumps are available or find the dump for a pass of interest follow the steps below. 1. Invoke GCC with -fdump-passes and in the stderr output look for a code that corresponds to the pass you are interested in. For example, the codes "tree-evrp", "tree-vrp1", and "tree-vrp2" correspond to the three Value Range Propagation passes. The number at the end distinguishes distinct invocations of the same pass. 2. To enable the creation of the dump file, append the pass code to the -fdump- option prefix and invoke GCC with it. For example, to enable the dump from the Early Value Range Propagation pass, invoke GCC with the -fdump-tree-evrp option. Optionally, you may specify the name of the dump file. If you don't specify one, GCC creates as described below. 3. Find the pass dump in a file whose name is composed of three components separated by a period: the name of the source file GCC was invoked to compile, a numeric suffix indicating the pass number followed by the letter t for tree passes (and the letter r for RTL passes), and finally the pass code. For example, the Early VRP pass dump might be in a file named myfile.c.038t.evrp in the current working directory. Note that the numeric codes are not stable and may change from one version of GCC to another. -fopt-info -fopt-info-options -fopt-info-options=filename Controls optimization dumps from various optimization passes. If the -options form is used, options is a list of - separated option keywords to select the dump details and optimizations. The options can be divided into three groups: 1. options describing what kinds of messages should be emitted, 2. options describing the verbosity of the dump, and 3. options describing which optimizations should be included. The options from each group can be freely mixed as they are non-overlapping. However, in case of any conflicts, the later options override the earlier options on the command line. The following options control which kinds of messages should be emitted: optimized Print information when an optimization is successfully applied. It is up to a pass to decide which information is relevant. For example, the vectorizer passes print the source location of loops which are successfully vectorized. missed Print information about missed optimizations. Individual passes control which information to include in the output. note Print verbose information about optimizations, such as certain transformations, more detailed messages about decisions etc. all Print detailed optimization information. This includes optimized, missed, and note. The following option controls the dump verbosity: internals By default, only "high-level" messages are emitted. This option enables additional, more detailed, messages, which are likely to only be of interest to GCC developers. One or more of the following option keywords can be used to describe a group of optimizations: ipa Enable dumps from all interprocedural optimizations. 
loop Enable dumps from all loop optimizations. inline Enable dumps from all inlining optimizations. omp Enable dumps from all OMP (Offloading and Multi Processing) optimizations. vec Enable dumps from all vectorization optimizations. optall Enable dumps from all optimizations. This is a superset of the optimization groups listed above. If options is omitted, it defaults to optimized-optall, which means to dump messages about successful optimizations from all the passes, omitting messages that are treated as "internals". If the filename is provided, then the dumps from all the applicable optimizations are concatenated into the filename. Otherwise the dump is output onto stderr. Though multiple -fopt-info options are accepted, only one of them can include a filename. If other filenames are provided then all but the first such option are ignored. Note that the output filename is overwritten in case of multiple translation units. If a combined output from multiple translation units is desired, stderr should be used instead. In the following example, the optimization info is output to stderr: gcc -O3 -fopt-info This example: gcc -O3 -fopt-info-missed=missed.all outputs missed optimization report from all the passes into missed.all, and this one: gcc -O2 -ftree-vectorize -fopt-info-vec-missed prints information about missed optimization opportunities from vectorization passes on stderr. Note that -fopt-info-vec-missed is equivalent to -fopt-info-missed-vec. The order of the optimization group names and message types listed after -fopt-info does not matter. As another example, gcc -O3 -fopt-info-inline-optimized-missed=inline.txt outputs information about missed optimizations as well as optimized locations from all the inlining passes into inline.txt. Finally, consider: gcc -fopt-info-vec-missed=vec.miss -fopt-info-loop-optimized=loop.opt Here the two output filenames vec.miss and loop.opt are in conflict since only one output file is allowed. In this case, only the first option takes effect and the subsequent options are ignored. Thus only vec.miss is produced which contains dumps from the vectorizer about missed opportunities. -fsave-optimization-record Write a SRCFILE.opt-record.json.gz file detailing what optimizations were performed, for those optimizations that support -fopt-info. This option is experimental and the format of the data within the compressed JSON file is subject to change. It is roughly equivalent to a machine-readable version of -fopt-info-all, as a collection of messages with source file, line number and column number, with the following additional data for each message: * the execution count of the code being optimized, along with metadata about whether this was from actual profile data, or just an estimate, allowing consumers to prioritize messages by code hotness, * the function name of the code being optimized, where applicable, * the "inlining chain" for the code being optimized, so that when a function is inlined into several different places (which might themselves be inlined), the reader can distinguish between the copies, * objects identifying those parts of the message that refer to expressions, statements or symbol-table nodes, which of these categories they are, and, when available, their source code location, * the GCC pass that emitted the message, and * the location in GCC's own code from which the message was emitted Additionally, some messages are logically nested within other messages, reflecting implementation details of the optimization passes. 
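As a sketch of how the record might be produced and inspected (the source file name, the derived dump file name, and the use of gzip are illustrative assumptions, not part of the documented interface):

        gcc -O2 -ftree-vectorize -fsave-optimization-record -c foo.c
        gzip -dc foo.c.opt-record.json.gz | head

Because the format is experimental, any tooling built on the record should expect the field layout to change between releases.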
-fsched-verbose=n On targets that use instruction scheduling, this option controls the amount of debugging output the scheduler prints to the dump files. For n greater than zero, -fsched-verbose outputs the same information as -fdump-rtl-sched1 and -fdump-rtl-sched2. For n greater than one, it also outputs basic block probabilities, detailed ready list information and unit/insn info. For n greater than two, it includes RTL at abort point, control-flow and regions info. And for n over four, -fsched-verbose also includes dependence info. -fenable-kind-pass -fdisable-kind-pass=range-list This is a set of options that are used to explicitly disable/enable optimization passes. These options are intended for use for debugging GCC. Compiler users should use regular options for enabling/disabling passes instead. -fdisable-ipa-pass Disable IPA pass pass. pass is the pass name. If the same pass is statically invoked in the compiler multiple times, the pass name should be appended with a sequential number starting from 1. -fdisable-rtl-pass -fdisable-rtl-pass=range-list Disable RTL pass pass. pass is the pass name. If the same pass is statically invoked in the compiler multiple times, the pass name should be appended with a sequential number starting from 1. range-list is a comma-separated list of function ranges or assembler names. Each range is a number pair separated by a colon. The range is inclusive in both ends. If the range is trivial, the number pair can be simplified as a single number. If the function's call graph node's uid falls within one of the specified ranges, the pass is disabled for that function. The uid is shown in the function header of a dump file, and the pass names can be dumped by using option -fdump-passes. -fdisable-tree-pass -fdisable-tree-pass=range-list Disable tree pass pass. See -fdisable-rtl for the description of option arguments. -fenable-ipa-pass Enable IPA pass pass. pass is the pass name. If the same pass is statically invoked in the compiler multiple times, the pass name should be appended with a sequential number starting from 1. -fenable-rtl-pass -fenable-rtl-pass=range-list Enable RTL pass pass. See -fdisable-rtl for option argument description and examples. -fenable-tree-pass -fenable-tree-pass=range-list Enable tree pass pass. See -fdisable-rtl for the description of option arguments. Here are some examples showing uses of these options.

        # disable ccp1 for all functions
        -fdisable-tree-ccp1
        # enable complete unroll for function whose cgraph node uid is 1
        -fenable-tree-cunroll=1
        # disable gcse2 for functions at the following ranges [1,1],
        # [300,400], and [400,1000]
        -fdisable-rtl-gcse2=1,300:400,400:1000
        # disable gcse2 for functions foo and foo2
        -fdisable-rtl-gcse2=foo,foo2
        # disable early inlining
        -fdisable-tree-einline
        # disable ipa inlining
        -fdisable-ipa-inline
        # enable tree full unroll
        -fenable-tree-unroll

-fchecking -fchecking=n Enable internal consistency checking. The default depends on the compiler configuration. -fchecking=2 enables further internal consistency checking that might affect code generation. -frandom-seed=string This option provides a seed that GCC uses in place of random numbers in generating certain symbol names that have to be different in every compiled file. It is also used to place unique stamps in coverage data files and the object files that produce them. You can use the -frandom-seed option to produce reproducibly identical object files.
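For instance, a build that wants bit-for-bit identical objects across rebuilds might pass a fixed per-file seed; the seed value and source file name below are placeholders used only for illustration:

        gcc -c -frandom-seed=foo-seed-1 foo.c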
The string can either be a number (decimal, octal or hex) or an arbitrary string (in which case it's converted to a number by computing CRC32). The string should be different for every file you compile. -save-temps -save-temps=cwd Store the usual "temporary" intermediate files permanently; place them in the current directory and name them based on the source file. Thus, compiling foo.c with -c -save-temps produces files foo.i and foo.s, as well as foo.o. This creates a preprocessed foo.i output file even though the compiler now normally uses an integrated preprocessor. When used in combination with the -x command-line option, -save-temps is sensible enough to avoid over writing an input source file with the same extension as an intermediate file. The corresponding intermediate file may be obtained by renaming the source file before using -save-temps. If you invoke GCC in parallel, compiling several different source files that share a common base name in different subdirectories or the same source file compiled for multiple output destinations, it is likely that the different parallel compilers will interfere with each other, and overwrite the temporary files. For instance: gcc -save-temps -o outdir1/foo.o indir1/foo.c& gcc -save-temps -o outdir2/foo.o indir2/foo.c& may result in foo.i and foo.o being written to simultaneously by both compilers. -save-temps=obj Store the usual "temporary" intermediate files permanently. If the -o option is used, the temporary files are based on the object file. If the -o option is not used, the -save-temps=obj switch behaves like -save-temps. For example: gcc -save-temps=obj -c foo.c gcc -save-temps=obj -c bar.c -o dir/xbar.o gcc -save-temps=obj foobar.c -o dir2/yfoobar creates foo.i, foo.s, dir/xbar.i, dir/xbar.s, dir2/yfoobar.i, dir2/yfoobar.s, and dir2/yfoobar.o. -time[=file] Report the CPU time taken by each subprocess in the compilation sequence. For C source files, this is the compiler proper and assembler (plus the linker if linking is done). Without the specification of an output file, the output looks like this: # cc1 0.12 0.01 # as 0.00 0.01 The first number on each line is the "user time", that is time spent executing the program itself. The second number is "system time", time spent executing operating system routines on behalf of the program. Both numbers are in seconds. With the specification of an output file, the output is appended to the named file, and it looks like this: 0.12 0.01 cc1 <options> 0.00 0.01 as <options> The "user time" and the "system time" are moved before the program name, and the options passed to the program are displayed, so that one can later tell what file was being compiled, and with which options. -fdump-final-insns[=file] Dump the final internal representation (RTL) to file. If the optional argument is omitted (or if file is "."), the name of the dump file is determined by appending ".gkd" to the compilation output file name. -fcompare-debug[=opts] If no error occurs during compilation, run the compiler a second time, adding opts and -fcompare-debug-second to the arguments passed to the second compilation. Dump the final internal representation in both compilations, and print an error if they differ. If the equal sign is omitted, the default -gtoggle is used. The environment variable GCC_COMPARE_DEBUG, if defined, non- empty and nonzero, implicitly enables -fcompare-debug. If GCC_COMPARE_DEBUG is defined to a string starting with a dash, then it is used for opts, otherwise the default -gtoggle is used. 
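As an illustration (the source file name is a placeholder), the following compiles foo.c twice, the second time with the default -gtoggle added, and reports an error if the final representations of the two runs differ:

        gcc -O2 -fcompare-debug -c foo.c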
-fcompare-debug=, with the equal sign but without opts, is equivalent to -fno-compare-debug, which disables the dumping of the final representation and the second compilation, preventing even GCC_COMPARE_DEBUG from taking effect. To verify full coverage during -fcompare-debug testing, set GCC_COMPARE_DEBUG to say -fcompare-debug-not-overridden, which GCC rejects as an invalid option in any actual compilation (rather than preprocessing, assembly or linking). To get just a warning, setting GCC_COMPARE_DEBUG to -w%n-fcompare-debug not overridden will do. -fcompare-debug-second This option is implicitly passed to the compiler for the second compilation requested by -fcompare-debug, along with options to silence warnings, and omitting other options that would cause the compiler to produce output to files or to standard output as a side effect. Dump files and preserved temporary files are renamed so as to contain the ".gk" additional extension during the second compilation, to avoid overwriting those generated by the first. When this option is passed to the compiler driver, it causes the first compilation to be skipped, which makes it useful for little other than debugging the compiler proper. -gtoggle Turn off generation of debug info, if leaving out this option generates it, or turn it on at level 2 otherwise. The position of this argument in the command line does not matter; it takes effect after all other options are processed, and it does so only once, no matter how many times it is given. This is mainly intended to be used with -fcompare-debug. -fvar-tracking-assignments-toggle Toggle -fvar-tracking-assignments, in the same way that -gtoggle toggles -g. -Q Makes the compiler print out each function name as it is compiled, and print some statistics about each pass when it finishes. -ftime-report Makes the compiler print some statistics about the time consumed by each pass when it finishes. -ftime-report-details Record the time consumed by infrastructure parts separately for each pass. -fira-verbose=n Control the verbosity of the dump file for the integrated register allocator. The default value is 5. If the value n is greater or equal to 10, the dump output is sent to stderr using the same format as n minus 10. -flto-report Prints a report with internal details on the workings of the link-time optimizer. The contents of this report vary from version to version. It is meant to be useful to GCC developers when processing object files in LTO mode (via -flto). Disabled by default. -flto-report-wpa Like -flto-report, but only print for the WPA phase of Link Time Optimization. -fmem-report Makes the compiler print some statistics about permanent memory allocation when it finishes. -fmem-report-wpa Makes the compiler print some statistics about permanent memory allocation for the WPA phase only. -fpre-ipa-mem-report -fpost-ipa-mem-report Makes the compiler print some statistics about permanent memory allocation before or after interprocedural optimization. -fprofile-report Makes the compiler print some statistics about consistency of the (estimated) profile and effect of individual passes. -fstack-usage Makes the compiler output stack usage information for the program, on a per-function basis. The filename for the dump is made by appending .su to the auxname. auxname is generated from the name of the output file, if explicitly specified and it is not an executable, otherwise it is the basename of the source file. An entry is made up of three fields: * The name of the function. * A number of bytes. 
* One or more qualifiers: "static", "dynamic", "bounded". The qualifier "static" means that the function manipulates the stack statically: a fixed number of bytes are allocated for the frame on function entry and released on function exit; no stack adjustments are otherwise made in the function. The second field is this fixed number of bytes. The qualifier "dynamic" means that the function manipulates the stack dynamically: in addition to the static allocation described above, stack adjustments are made in the body of the function, for example to push/pop arguments around function calls. If the qualifier "bounded" is also present, the amount of these adjustments is bounded at compile time and the second field is an upper bound of the total amount of stack used by the function. If it is not present, the amount of these adjustments is not bounded at compile time and the second field only represents the bounded part. -fstats Emit statistics about front-end processing at the end of the compilation. This option is supported only by the C++ front end, and the information is generally only useful to the G++ development team. -fdbg-cnt-list Print the name and the counter upper bound for all debug counters. -fdbg-cnt=counter-value-list Set the internal debug counter lower and upper bound. counter-value-list is a comma-separated list of name:lower_bound:upper_bound tuples which sets the lower and the upper bound of each debug counter name. The lower_bound is optional and is zero initialized if not set. All debug counters have the initial upper bound of "UINT_MAX"; thus "dbg_cnt" returns true always unless the upper bound is set by this option. For example, with -fdbg-cnt=dce:2:4,tail_call:10, "dbg_cnt(dce)" returns true only for third and fourth invocation. For "dbg_cnt(tail_call)" true is returned for first 10 invocations. -print-file-name=library Print the full absolute name of the library file library that would be used when linking---and don't do anything else. With this option, GCC does not compile or link anything; it just prints the file name. -print-multi-directory Print the directory name corresponding to the multilib selected by any other switches present in the command line. This directory is supposed to exist in GCC_EXEC_PREFIX. -print-multi-lib Print the mapping from multilib directory names to compiler switches that enable them. The directory name is separated from the switches by ;, and each switch starts with an @ instead of the -, without spaces between multiple switches. This is supposed to ease shell processing. -print-multi-os-directory Print the path to OS libraries for the selected multilib, relative to some lib subdirectory. If OS libraries are present in the lib subdirectory and no multilibs are used, this is usually just ., if OS libraries are present in libsuffix sibling directories this prints e.g. ../lib64, ../lib or ../lib32, or if OS libraries are present in lib/subdir subdirectories it prints e.g. amd64, sparcv9 or ev6. -print-multiarch Print the path to OS libraries for the selected multiarch, relative to some lib subdirectory. -print-prog-name=program Like -print-file-name, but searches for a program such as cpp. -print-libgcc-file-name Same as -print-file-name=libgcc.a. This is useful when you use -nostdlib or -nodefaultlibs but you do want to link with libgcc.a. You can do: gcc -nostdlib <files>... 
`gcc -print-libgcc-file-name` -print-search-dirs Print the name of the configured installation directory and a list of program and library directories gcc searches---and don't do anything else. This is useful when gcc prints the error message installation problem, cannot exec cpp0: No such file or directory. To resolve this you either need to put cpp0 and the other compiler components where gcc expects to find them, or you can set the environment variable GCC_EXEC_PREFIX to the directory where you installed them. Don't forget the trailing /. -print-sysroot Print the target sysroot directory that is used during compilation. This is the target sysroot specified either at configure time or using the --sysroot option, possibly with an extra suffix that depends on compilation options. If no target sysroot is specified, the option prints nothing. -print-sysroot-headers-suffix Print the suffix added to the target sysroot when searching for headers, or give an error if the compiler is not configured with such a suffix---and don't do anything else. -dumpmachine Print the compiler's target machine (for example, i686-pc-linux-gnu)---and don't do anything else. -dumpversion Print the compiler version (for example, 3.0, 6.3.0 or 7)---and don't do anything else. This is the compiler version used in filesystem paths and specs. Depending on how the compiler has been configured it can be just a single number (major version), two numbers separated by a dot (major and minor version) or three numbers separated by dots (major, minor and patchlevel version). -dumpfullversion Print the full compiler version---and don't do anything else. The output is always three numbers separated by dots, major, minor and patchlevel version. -dumpspecs Print the compiler's built-in specs---and don't do anything else. (This is used when GCC itself is being built.) Machine-Dependent Options Each target machine supported by GCC can have its own options---for example, to allow you to compile for a particular processor variant or ABI, or to control optimizations specific to that machine. By convention, the names of machine-specific options start with -m. Some configurations of the compiler also support additional target-specific options, usually for compatibility with other compilers on the same platform. AArch64 Options These options are defined for AArch64 implementations: -mabi=name Generate code for the specified data model. Permissible values are ilp32 for SysV-like data model where int, long int and pointers are 32 bits, and lp64 for SysV-like data model where int is 32 bits, but long int and pointers are 64 bits. The default depends on the specific target configuration. Note that the LP64 and ILP32 ABIs are not link-compatible; you must compile your entire program with the same ABI, and link with a compatible set of libraries. -mbig-endian Generate big-endian code. This is the default when GCC is configured for an aarch64_be-*-* target. -mgeneral-regs-only Generate code which uses only the general-purpose registers. This will prevent the compiler from using floating-point and Advanced SIMD registers but will not impose any restrictions on the assembler. -mlittle-endian Generate little-endian code. This is the default when GCC is configured for an aarch64-*-* but not an aarch64_be-*-* target. -mcmodel=tiny Generate code for the tiny code model. The program and its statically defined symbols must be within 1MB of each other. Programs can be statically or dynamically linked. -mcmodel=small Generate code for the small code model. 
The program and its statically defined symbols must be within 4GB of each other. Programs can be statically or dynamically linked. This is the default code model. -mcmodel=large Generate code for the large code model. This makes no assumptions about addresses and sizes of sections. Programs can be statically linked only. -mstrict-align -mno-strict-align Avoid or allow generating memory accesses that may not be aligned on a natural object boundary as described in the architecture specification. -momit-leaf-frame-pointer -mno-omit-leaf-frame-pointer Omit or keep the frame pointer in leaf functions. The former behavior is the default. -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset Generate stack protection code using canary at guard. Supported locations are global for a global canary or sysreg for a canary in an appropriate system register. With the latter choice the options -mstack-protector-guard-reg=reg and -mstack-protector-guard-offset=offset furthermore specify which system register to use as base register for reading the canary, and from what offset from that base register. There is no default register or offset as this is entirely for use within the Linux kernel. -mtls-dialect=desc Use TLS descriptors as the thread-local storage mechanism for dynamic accesses of TLS variables. This is the default. -mtls-dialect=traditional Use traditional TLS as the thread-local storage mechanism for dynamic accesses of TLS variables. -mtls-size=size Specify bit size of immediate TLS offsets. Valid values are 12, 24, 32, 48. This option requires binutils 2.26 or newer. -mfix-cortex-a53-835769 -mno-fix-cortex-a53-835769 Enable or disable the workaround for the ARM Cortex-A53 erratum number 835769. This involves inserting a NOP instruction between memory instructions and 64-bit integer multiply-accumulate instructions. -mfix-cortex-a53-843419 -mno-fix-cortex-a53-843419 Enable or disable the workaround for the ARM Cortex-A53 erratum number 843419. This erratum workaround is made at link time and this will only pass the corresponding flag to the linker. -mlow-precision-recip-sqrt -mno-low-precision-recip-sqrt Enable or disable the reciprocal square root approximation. This option only has an effect if -ffast-math or -funsafe-math-optimizations is used as well. Enabling this reduces precision of reciprocal square root results to about 16 bits for single precision and to 32 bits for double precision. -mlow-precision-sqrt -mno-low-precision-sqrt Enable or disable the square root approximation. This option only has an effect if -ffast-math or -funsafe-math-optimizations is used as well. Enabling this reduces precision of square root results to about 16 bits for single precision and to 32 bits for double precision. If enabled, it implies -mlow-precision-recip-sqrt. -mlow-precision-div -mno-low-precision-div Enable or disable the division approximation.
This option only has an effect if -ffast-math or -funsafe-math-optimizations is used as well. Enabling this reduces precision of division results to about 16 bits for single precision and to 32 bits for double precision. -mtrack-speculation -mno-track-speculation Enable or disable generation of additional code to track speculative execution through conditional branches. The tracking state can then be used by the compiler when expanding calls to "__builtin_speculation_safe_copy" to permit a more efficient code sequence to be generated. -moutline-atomics -mno-outline-atomics Enable or disable calls to out-of-line helpers to implement atomic operations. These helpers will, at runtime, determine if the LSE instructions from ARMv8.1-A can be used; if not, they will use the load/store-exclusive instructions that are present in the base ARMv8.0 ISA. This option is only applicable when compiling for the base ARMv8.0 instruction set. If using a later revision, e.g. -march=armv8.1-a or -march=armv8-a+lse, the ARMv8.1-Atomics instructions will be used directly. The same applies when using -mcpu= when the selected cpu supports the lse feature. -march=name Specify the name of the target architecture and, optionally, one or more feature modifiers. This option has the form -march=arch{+[no]feature}*. The permissible values for arch are armv8-a, armv8.1-a, armv8.2-a, armv8.3-a, armv8.4-a, armv8.5-a or native. The value armv8.5-a implies armv8.4-a and enables compiler support for the ARMv8.5-A architecture extensions. The value armv8.4-a implies armv8.3-a and enables compiler support for the ARMv8.4-A architecture extensions. The value armv8.3-a implies armv8.2-a and enables compiler support for the ARMv8.3-A architecture extensions. The value armv8.2-a implies armv8.1-a and enables compiler support for the ARMv8.2-A architecture extensions. The value armv8.1-a implies armv8-a and enables compiler support for the ARMv8.1-A architecture extension. In particular, it enables the +crc, +lse, and +rdma features. The value native is available on native AArch64 GNU/Linux and causes the compiler to pick the architecture of the host system. This option has no effect if the compiler is unable to recognize the architecture of the host system. The permissible values for feature are listed in the sub-section on -march and -mcpu Feature Modifiers. Where conflicting feature modifiers are specified, the right-most feature is used. GCC uses name to determine what kind of instructions it can emit when generating assembly code. If -march is specified without either of -mtune or -mcpu also being specified, the code is tuned to perform well across a range of target processors implementing the target architecture. -mtune=name Specify the name of the target processor for which GCC should tune the performance of the code. Permissible values for this option are: generic, cortex-a35, cortex-a53, cortex-a55, cortex-a57, cortex-a72, cortex-a73, cortex-a75, cortex-a76, ares, exynos-m1, emag, falkor, neoverse-e1, neoverse-n1, neoverse-n2, neoverse-v1, neoverse-512tvb, qdf24xx, saphira, phecda, xgene1, vulcan, octeontx, octeontx81, octeontx83, a64fx, thunderx, thunderxt88, thunderxt88p1, thunderxt81, tsv110, thunderxt83, thunderx2t99, zeus, cortex-a57.cortex-a53, cortex-a72.cortex-a53, cortex-a73.cortex-a35, cortex-a73.cortex-a53, cortex-a75.cortex-a55, cortex-a76.cortex-a55, native.
The values cortex-a57.cortex-a53, cortex-a72.cortex-a53, cortex-a73.cortex-a35, cortex-a73.cortex-a53, cortex-a75.cortex-a55, cortex-a76.cortex-a55 specify that GCC should tune for a big.LITTLE system. The value neoverse-512tvb specifies that GCC should tune for Neoverse cores that (a) implement SVE and (b) have a total vector bandwidth of 512 bits per cycle. In other words, the option tells GCC to tune for Neoverse cores that can execute 4 128-bit Advanced SIMD arithmetic instructions a cycle and that can execute an equivalent number of SVE arithmetic instructions per cycle (2 for 256-bit SVE, 4 for 128-bit SVE). This is more general than tuning for a specific core like Neoverse V1 but is more specific than the default tuning described below. Additionally on native AArch64 GNU/Linux systems the value native tunes performance to the host system. This option has no effect if the compiler is unable to recognize the processor of the host system. Where none of -mtune=, -mcpu= or -march= are specified, the code is tuned to perform well across a range of target processors. This option cannot be suffixed by feature modifiers. -mcpu=name Specify the name of the target processor, optionally suffixed by one or more feature modifiers. This option has the form -mcpu=cpu{+[no]feature}*, where the permissible values for cpu are the same as those available for -mtune. The permissible values for feature are documented in the sub-section on -march and -mcpu Feature Modifiers. Where conflicting feature modifiers are specified, the right-most feature is used. GCC uses name to determine what kind of instructions it can emit when generating assembly code (as if by -march) and to determine the target processor for which to tune for performance (as if by -mtune). Where this option is used in conjunction with -march or -mtune, those options take precedence over the appropriate part of this option. -mcpu=neoverse-512tvb is special in that it does not refer to a specific core, but instead refers to all Neoverse cores that (a) implement SVE and (b) have a total vector bandwidth of 512 bits a cycle. Unless overridden by -march, -mcpu=neoverse-512tvb generates code that can run on a Neoverse V1 core, since Neoverse V1 is the first Neoverse core with these properties. Unless overridden by -mtune, -mcpu=neoverse-512tvb tunes code in the same way as for -mtune=neoverse-512tvb. -moverride=string Override tuning decisions made by the back-end in response to a -mtune= switch. The syntax, semantics, and accepted values for string in this option are not guaranteed to be consistent across releases. This option is only intended to be useful when developing GCC. -mverbose-cost-dump Enable verbose cost model dumping in the debug dump files. This option is provided for use in debugging the compiler. -mpc-relative-literal-loads -mno-pc-relative-literal-loads Enable or disable PC-relative literal loads. With this option literal pools are accessed using a single instruction and emitted after each function. This limits the maximum size of functions to 1MB. This is enabled by default for -mcmodel=tiny. -msign-return-address=scope Select the function scope on which return address signing will be applied. Permissible values are none, which disables return address signing, non-leaf, which enables pointer signing for functions which are not leaf functions, and all, which enables pointer signing for all functions. The default value is none. This option has been deprecated by -mbranch-protection.
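As an illustration, return-address signing for non-leaf functions could be requested either with the deprecated option or, roughly equivalently, with its replacement described below (the source file name is a placeholder):

        gcc -c -march=armv8.3-a -msign-return-address=non-leaf foo.c
        gcc -c -march=armv8.3-a -mbranch-protection=pac-ret foo.c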
-mbranch-protection=none|standard|pac-ret[+leaf]|bti Select the branch protection features to use. none is the default and turns off all types of branch protection. standard turns on all types of branch protection features. If a feature has additional tuning options, then standard sets it to its standard level. pac-ret[+leaf] turns on return address signing to its standard level: signing functions that save the return address to memory (non-leaf functions will practically always do this) using the a-key. The optional argument leaf can be used to extend the signing to include leaf functions. bti turns on branch target identification mechanism. -mharden-sls=opts Enable compiler hardening against straight line speculation (SLS). opts is a comma-separated list of the following options: retbr blr In addition, -mharden-sls=all enables all SLS hardening while -mharden-sls=none disables all SLS hardening. -msve-vector-bits=bits Specify the number of bits in an SVE vector register. This option only has an effect when SVE is enabled. GCC supports two forms of SVE code generation: "vector-length agnostic" output that works with any size of vector register and "vector-length specific" output that allows GCC to make assumptions about the vector length when it is useful for optimization reasons. The possible values of bits are: scalable, 128, 256, 512, 1024 and 2048. Specifying scalable selects vector-length agnostic output. At present -msve-vector-bits=128 also generates vector-length agnostic output. All other values generate vector-length specific code. The behavior of these values may change in future releases and no value except scalable should be relied on for producing code that is portable across different hardware SVE vector lengths. The default is -msve-vector-bits=scalable, which produces vector-length agnostic code. -march and -mcpu Feature Modifiers Feature modifiers used with -march and -mcpu can be any of the following and their inverses nofeature: crc Enable CRC extension. This is on by default for -march=armv8.1-a. crypto Enable Crypto extension. This also enables Advanced SIMD and floating-point instructions. fp Enable floating-point instructions. This is on by default for all possible values for options -march and -mcpu. simd Enable Advanced SIMD instructions. This also enables floating-point instructions. This is on by default for all possible values for options -march and -mcpu. sve Enable Scalable Vector Extension instructions. This also enables Advanced SIMD and floating-point instructions. lse Enable Large System Extension instructions. This is on by default for -march=armv8.1-a. rdma Enable Round Double Multiply Accumulate instructions. This is on by default for -march=armv8.1-a. fp16 Enable FP16 extension. This also enables floating-point instructions. fp16fml Enable FP16 fmla extension. This also enables FP16 extensions and floating-point instructions. This option is enabled by default for -march=armv8.4-a. Use of this option with architectures prior to Armv8.2-A is not supported. rcpc Enable the RcPc extension. This does not change code generation from GCC, but is passed on to the assembler, enabling inline asm statements to use instructions from the RcPc extension. dotprod Enable the Dot Product extension. This also enables Advanced SIMD instructions. aes Enable the Armv8-a aes and pmull crypto extension. This also enables Advanced SIMD instructions. sha2 Enable the Armv8-a sha2 crypto extension. This also enables Advanced SIMD instructions. 
sha3 Enable the sha512 and sha3 crypto extension. This also enables Advanced SIMD instructions. Use of this option with architectures prior to Armv8.2-A is not supported. sm4 Enable the sm3 and sm4 crypto extension. This also enables Advanced SIMD instructions. Use of this option with architectures prior to Armv8.2-A is not supported. profile Enable the Statistical Profiling extension. This option is only to enable the extension at the assembler level and does not affect code generation. rng Enable the Armv8.5-a Random Number instructions. This option is only to enable the extension at the assembler level and does not affect code generation. memtag Enable the Armv8.5-a Memory Tagging Extensions. This option is only to enable the extension at the assembler level and does not affect code generation. sb Enable the Armv8-a Speculation Barrier instruction. This option is only to enable the extension at the assembler level and does not affect code generation. This option is enabled by default for -march=armv8.5-a. ssbs Enable the Armv8-a Speculative Store Bypass Safe instruction. This option is only to enable the extension at the assembler level and does not affect code generation. This option is enabled by default for -march=armv8.5-a. predres Enable the Armv8-a Execution and Data Prediction Restriction instructions. This option is only to enable the extension at the assembler level and does not affect code generation. This option is enabled by default for -march=armv8.5-a. Feature crypto implies aes, sha2, and simd, which implies fp. Conversely, nofp implies nosimd, which implies nocrypto, noaes and nosha2. Adapteva Epiphany Options These -m options are defined for Adapteva Epiphany: -mhalf-reg-file Don't allocate any register in the range "r32"..."r63". That allows code to run on hardware variants that lack these registers. -mprefer-short-insn-regs Preferentially allocate registers that allow short instruction generation. This can result in increased instruction count, so this may either reduce or increase overall code size. -mbranch-cost=num Set the cost of branches to roughly num "simple" instructions. This cost is only a heuristic and is not guaranteed to produce consistent results across releases. -mcmove Enable the generation of conditional moves. -mnops=num Emit num NOPs before every other generated instruction. -mno-soft-cmpsf For single-precision floating-point comparisons, emit an "fsub" instruction and test the flags. This is faster than a software comparison, but can get incorrect results in the presence of NaNs, or when two different small numbers are compared such that their difference is calculated as zero. The default is -msoft-cmpsf, which uses slower, but IEEE- compliant, software comparisons. -mstack-offset=num Set the offset between the top of the stack and the stack pointer. E.g., a value of 8 means that the eight bytes in the range "sp+0...sp+7" can be used by leaf functions without stack allocation. Values other than 8 or 16 are untested and unlikely to work. Note also that this option changes the ABI; compiling a program with a different stack offset than the libraries have been compiled with generally does not work. This option can be useful if you want to evaluate if a different stack offset would give you better code, but to actually use a different stack offset to build working programs, it is recommended to configure the toolchain with the appropriate --with-stack-offset=num option. 
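For example, to see what the code generator would do with a larger offset before committing to it, one might compile to assembly only (this assumes an Epiphany-targeted gcc and a placeholder file name):

        gcc -O2 -mstack-offset=16 -S foo.c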
-mno-round-nearest Make the scheduler assume that the rounding mode has been set to truncating. The default is -mround-nearest. -mlong-calls If not otherwise specified by an attribute, assume all calls might be beyond the offset range of the "b" / "bl" instructions, and therefore load the function address into a register before performing a (otherwise direct) call. This is the default. -mshort-calls If not otherwise specified by an attribute, assume all direct calls are in the range of the "b" / "bl" instructions, so use these instructions for direct calls. The default is -mlong-calls. -msmall16 Assume addresses can be loaded as 16-bit unsigned values. This does not apply to function addresses for which -mlong-calls semantics are in effect. -mfp-mode=mode Set the prevailing mode of the floating-point unit. This determines the floating-point mode that is provided and expected at function call and return time. Making this mode match the mode you predominantly need at function start can make your programs smaller and faster by avoiding unnecessary mode switches. mode can be set to one of the following values: caller Any mode at function entry is valid, and retained or restored when the function returns, and when it calls other functions. This mode is useful for compiling libraries or other compilation units you might want to incorporate into different programs with different prevailing FPU modes, and the convenience of being able to use a single object file outweighs the size and speed overhead for any extra mode switching that might be needed, compared with what would be needed with a more specific choice of prevailing FPU mode. truncate This is the mode used for floating-point calculations with truncating (i.e. round towards zero) rounding mode. That includes conversion from floating point to integer. round-nearest This is the mode used for floating-point calculations with round-to-nearest-or-even rounding mode. int This is the mode used to perform integer calculations in the FPU, e.g. integer multiply, or integer multiply-and-accumulate. The default is -mfp-mode=caller. -mno-split-lohi -mno-postinc -mno-postmodify Code generation tweaks that disable, respectively, splitting of 32-bit loads, generation of post-increment addresses, and generation of post-modify addresses. The defaults are -msplit-lohi, -mpostinc, and -mpostmodify. -mnovect-double Change the preferred SIMD mode to SImode. The default is -mvect-double, which uses DImode as preferred SIMD mode. -max-vect-align=num The maximum alignment for SIMD vector mode types. num may be 4 or 8. The default is 8. Note that this is an ABI change, even though many library function interfaces are unaffected if they don't use SIMD vector modes in places that affect size and/or alignment of relevant types. -msplit-vecmove-early Split vector moves into single word moves before reload. In theory this can give better register allocation, but so far the reverse seems to be generally the case. -m1reg-reg Specify a register to hold the constant -1, which makes loading small negative constants and certain bitmasks faster. Allowable values for reg are r43 and r63, which specify use of that register as a fixed register, and none, which means that no register is used for this purpose. The default is -m1reg-none. AMD GCN Options These options are defined specifically for the AMD GCN port. -march=gpu -mtune=gpu Set architecture type or tuning for gpu. Supported values for gpu are fiji Compile for GCN3 Fiji devices (gfx803).
gfx900 Compile for GCN5 Vega 10 devices (gfx900). -mstack-size=bytes Specify how many bytes of stack space will be requested for each GPU thread (wave-front). Beware that there may be many threads and limited memory available. The size of the stack allocation may also have an impact on run-time performance. The default is 32KB when using OpenACC or OpenMP, and 1MB otherwise. ARC Options The following options control the architecture variant for which code is being compiled: -mbarrel-shifter Generate instructions supported by barrel shifter. This is the default unless -mcpu=ARC601 or -mcpu=ARCEM is in effect. -mjli-always Force to call a function using jli_s instruction. This option is valid only for ARCv2 architecture. -mcpu=cpu Set architecture type, register usage, and instruction scheduling parameters for cpu. There are also shortcut alias options available for backward compatibility and convenience. Supported values for cpu are arc600 Compile for ARC600. Aliases: -mA6, -mARC600. arc601 Compile for ARC601. Alias: -mARC601. arc700 Compile for ARC700. Aliases: -mA7, -mARC700. This is the default when configured with --with-cpu=arc700. arcem Compile for ARC EM. archs Compile for ARC HS. em Compile for ARC EM CPU with no hardware extensions. em4 Compile for ARC EM4 CPU. em4_dmips Compile for ARC EM4 DMIPS CPU. em4_fpus Compile for ARC EM4 DMIPS CPU with the single-precision floating-point extension. em4_fpuda Compile for ARC EM4 DMIPS CPU with single-precision floating-point and double assist instructions. hs Compile for ARC HS CPU with no hardware extensions except the atomic instructions. hs34 Compile for ARC HS34 CPU. hs38 Compile for ARC HS38 CPU. hs38_linux Compile for ARC HS38 CPU with all hardware extensions on. arc600_norm Compile for ARC 600 CPU with "norm" instructions enabled. arc600_mul32x16 Compile for ARC 600 CPU with "norm" and 32x16-bit multiply instructions enabled. arc600_mul64 Compile for ARC 600 CPU with "norm" and "mul64"-family instructions enabled. arc601_norm Compile for ARC 601 CPU with "norm" instructions enabled. arc601_mul32x16 Compile for ARC 601 CPU with "norm" and 32x16-bit multiply instructions enabled. arc601_mul64 Compile for ARC 601 CPU with "norm" and "mul64"-family instructions enabled. nps400 Compile for ARC 700 on NPS400 chip. em_mini Compile for ARC EM minimalist configuration featuring reduced register set. -mdpfp -mdpfp-compact Generate double-precision FPX instructions, tuned for the compact implementation. -mdpfp-fast Generate double-precision FPX instructions, tuned for the fast implementation. -mno-dpfp-lrsr Disable "lr" and "sr" instructions from using FPX extension aux registers. -mea Generate extended arithmetic instructions. Currently only "divaw", "adds", "subs", and "sat16" are supported. This is always enabled for -mcpu=ARC700. -mno-mpy Do not generate "mpy"-family instructions for ARC700. This option is deprecated. -mmul32x16 Generate 32x16-bit multiply and multiply-accumulate instructions. -mmul64 Generate "mul64" and "mulu64" instructions. Only valid for -mcpu=ARC600. -mnorm Generate "norm" instructions. This is the default if -mcpu=ARC700 is in effect. -mspfp -mspfp-compact Generate single-precision FPX instructions, tuned for the compact implementation. -mspfp-fast Generate single-precision FPX instructions, tuned for the fast implementation. -msimd Enable generation of ARC SIMD instructions via target- specific builtins. Only valid for -mcpu=ARC700. -msoft-float This option ignored; it is provided for compatibility purposes only. 
Software floating-point code is emitted by default, and this default can overridden by FPX options; -mspfp, -mspfp-compact, or -mspfp-fast for single precision, and -mdpfp, -mdpfp-compact, or -mdpfp-fast for double precision. -mswap Generate "swap" instructions. -matomic This enables use of the locked load/store conditional extension to implement atomic memory built-in functions. Not available for ARC 6xx or ARC EM cores. -mdiv-rem Enable "div" and "rem" instructions for ARCv2 cores. -mcode-density Enable code density instructions for ARC EM. This option is on by default for ARC HS. -mll64 Enable double load/store operations for ARC HS cores. -mtp-regno=regno Specify thread pointer register number. -mmpy-option=multo Compile ARCv2 code with a multiplier design option. You can specify the option using either a string or numeric value for multo. wlh1 is the default value. The recognized values are: 0 none No multiplier available. 1 w 16x16 multiplier, fully pipelined. The following instructions are enabled: "mpyw" and "mpyuw". 2 wlh1 32x32 multiplier, fully pipelined (1 stage). The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 3 wlh2 32x32 multiplier, fully pipelined (2 stages). The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 4 wlh3 Two 16x16 multipliers, blocking, sequential. The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 5 wlh4 One 16x16 multiplier, blocking, sequential. The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 6 wlh5 One 32x4 multiplier, blocking, sequential. The following instructions are additionally enabled: "mpy", "mpyu", "mpym", "mpymu", and "mpy_s". 7 plus_dmpy ARC HS SIMD support. 8 plus_macd ARC HS SIMD support. 9 plus_qmacw ARC HS SIMD support. This option is only available for ARCv2 cores. -mfpu=fpu Enables support for specific floating-point hardware extensions for ARCv2 cores. Supported values for fpu are: fpus Enables support for single-precision floating-point hardware extensions. fpud Enables support for double-precision floating-point hardware extensions. The single-precision floating-point extension is also enabled. Not available for ARC EM. fpuda Enables support for double-precision floating-point hardware extensions using double-precision assist instructions. The single-precision floating-point extension is also enabled. This option is only available for ARC EM. fpuda_div Enables support for double-precision floating-point hardware extensions using double-precision assist instructions. The single-precision floating-point, square-root, and divide extensions are also enabled. This option is only available for ARC EM. fpuda_fma Enables support for double-precision floating-point hardware extensions using double-precision assist instructions. The single-precision floating-point and fused multiply and add hardware extensions are also enabled. This option is only available for ARC EM. fpuda_all Enables support for double-precision floating-point hardware extensions using double-precision assist instructions. All single-precision floating-point hardware extensions are also enabled. This option is only available for ARC EM. fpus_div Enables support for single-precision floating-point, square-root and divide hardware extensions. fpud_div Enables support for double-precision floating-point, square-root and divide hardware extensions. This option includes option fpus_div. 
Not available for ARC EM. fpus_fma Enables support for single-precision floating-point and fused multiply and add hardware extensions. fpud_fma Enables support for double-precision floating-point and fused multiply and add hardware extensions. This option includes option fpus_fma. Not available for ARC EM. fpus_all Enables support for all single-precision floating-point hardware extensions. fpud_all Enables support for all single- and double-precision floating-point hardware extensions. Not available for ARC EM. -mirq-ctrl-saved=register-range, blink, lp_count Specifies general-purposes registers that the processor automatically saves/restores on interrupt entry and exit. register-range is specified as two registers separated by a dash. The register range always starts with "r0", the upper limit is "fp" register. blink and lp_count are optional. This option is only valid for ARC EM and ARC HS cores. -mrgf-banked-regs=number Specifies the number of registers replicated in second register bank on entry to fast interrupt. Fast interrupts are interrupts with the highest priority level P0. These interrupts save only PC and STATUS32 registers to avoid memory transactions during interrupt entry and exit sequences. Use this option when you are using fast interrupts in an ARC V2 family processor. Permitted values are 4, 8, 16, and 32. -mlpc-width=width Specify the width of the "lp_count" register. Valid values for width are 8, 16, 20, 24, 28 and 32 bits. The default width is fixed to 32 bits. If the width is less than 32, the compiler does not attempt to transform loops in your program to use the zero-delay loop mechanism unless it is known that the "lp_count" register can hold the required loop-counter value. Depending on the width specified, the compiler and run-time library might continue to use the loop mechanism for various needs. This option defines macro "__ARC_LPC_WIDTH__" with the value of width. -mrf16 This option instructs the compiler to generate code for a 16-entry register file. This option defines the "__ARC_RF16__" preprocessor macro. -mbranch-index Enable use of "bi" or "bih" instructions to implement jump tables. The following options are passed through to the assembler, and also define preprocessor macro symbols. -mdsp-packa Passed down to the assembler to enable the DSP Pack A extensions. Also sets the preprocessor symbol "__Xdsp_packa". This option is deprecated. -mdvbf Passed down to the assembler to enable the dual Viterbi butterfly extension. Also sets the preprocessor symbol "__Xdvbf". This option is deprecated. -mlock Passed down to the assembler to enable the locked load/store conditional extension. Also sets the preprocessor symbol "__Xlock". -mmac-d16 Passed down to the assembler. Also sets the preprocessor symbol "__Xxmac_d16". This option is deprecated. -mmac-24 Passed down to the assembler. Also sets the preprocessor symbol "__Xxmac_24". This option is deprecated. -mrtsc Passed down to the assembler to enable the 64-bit time-stamp counter extension instruction. Also sets the preprocessor symbol "__Xrtsc". This option is deprecated. -mswape Passed down to the assembler to enable the swap byte ordering extension instruction. Also sets the preprocessor symbol "__Xswape". -mtelephony Passed down to the assembler to enable dual- and single- operand instructions for telephony. Also sets the preprocessor symbol "__Xtelephony". This option is deprecated. -mxy Passed down to the assembler to enable the XY memory extension. Also sets the preprocessor symbol "__Xxy". 
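Because these options define preprocessor symbols, one way to confirm what a particular set of flags provides is to dump the predefined macros with the standard -E -dM preprocessor options; the invocation below assumes an ARC-targeted gcc and is only a sketch:

        gcc -mswape -mxy -E -dM - </dev/null | grep -E '__Xswape|__Xxy'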
The following options control how the assembly code is annotated: -misize Annotate assembler instructions with estimated addresses. -mannotate-align Explain what alignment considerations lead to the decision to make an instruction short or long. The following options are passed through to the linker: -marclinux Passed through to the linker, to specify use of the "arclinux" emulation. This option is enabled by default in tool chains built for "arc-linux-uclibc" and "arceb-linux-uclibc" targets when profiling is not requested. -marclinux_prof Passed through to the linker, to specify use of the "arclinux_prof" emulation. This option is enabled by default in tool chains built for "arc-linux-uclibc" and "arceb-linux-uclibc" targets when profiling is requested. The following options control the semantics of generated code: -mlong-calls Generate calls as register indirect calls, thus providing access to the full 32-bit address range. -mmedium-calls Don't use less than 25-bit addressing range for calls, which is the offset available for an unconditional branch-and-link instruction. Conditional execution of function calls is suppressed, to allow use of the 25-bit range, rather than the 21-bit range with conditional branch-and-link. This is the default for tool chains built for "arc-linux-uclibc" and "arceb-linux-uclibc" targets. -G num Put definitions of externally-visible data in a small data section if that data is no bigger than num bytes. The default value of num is 4 for any ARC configuration, or 8 when we have double load/store operations. -mno-sdata Do not generate sdata references. This is the default for tool chains built for "arc-linux-uclibc" and "arceb-linux-uclibc" targets. -mvolatile-cache Use ordinarily cached memory accesses for volatile references. This is the default. -mno-volatile-cache Enable cache bypass for volatile references. The following options fine tune code generation: -malign-call Do alignment optimizations for call instructions. -mauto-modify-reg Enable the use of pre/post modify with register displacement. -mbbit-peephole Enable bbit peephole2. -mno-brcc This option disables a target-specific pass in arc_reorg to generate compare-and-branch ("brcc") instructions. It has no effect on generation of these instructions driven by the combiner pass. -mcase-vector-pcrel Use PC-relative switch case tables to enable case table shortening. This is the default for -Os. -mcompact-casesi Enable compact "casesi" pattern. This is the default for -Os, and only available for ARCv1 cores. This option is deprecated. -mno-cond-exec Disable the ARCompact-specific pass to generate conditional execution instructions. Due to delay slot scheduling and interactions between operand numbers, literal sizes, instruction lengths, and the support for conditional execution, the target-independent pass to generate conditional execution is often lacking, so the ARC port has kept a special pass around that tries to find more conditional execution generation opportunities after register allocation, branch shortening, and delay slot scheduling have been done. This pass generally, but not always, improves performance and code size, at the cost of extra compilation time, which is why there is an option to switch it off. If you have a problem with call instructions exceeding their allowable offset range because they are conditionalized, you should consider using -mmedium-calls instead. -mearly-cbranchsi Enable pre-reload use of the "cbranchsi" pattern. 
-mexpand-adddi Expand "adddi3" and "subdi3" at RTL generation time into "add.f", "adc" etc. This option is deprecated. -mindexed-loads Enable the use of indexed loads. This can be problematic because some optimizers then assume that indexed stores exist, which is not the case. -mlra Enable Local Register Allocation. This is still experimental for ARC, so by default the compiler uses standard reload (i.e. -mno-lra). -mlra-priority-none Don't indicate any priority for target registers. -mlra-priority-compact Indicate target register priority for r0..r3 / r12..r15. -mlra-priority-noncompact Reduce target register priority for r0..r3 / r12..r15. -mmillicode When optimizing for size (using -Os), prologues and epilogues that have to save or restore a large number of registers are often shortened by using a call to a special function in libgcc; this is referred to as a millicode call. As these calls can pose performance issues, and/or cause linking issues when linking in a nonstandard way, this option is provided to turn on or off millicode call generation. -mcode-density-frame This option enables the compiler to emit "enter" and "leave" instructions. These instructions are only valid for CPUs with the code-density feature. -mmixed-code Tweak register allocation to help 16-bit instruction generation. This generally has the effect of decreasing the average instruction size while increasing the instruction count. -mq-class Enable q instruction alternatives. This is the default for -Os. -mRcq Enable Rcq constraint handling. Most short code generation depends on this. This is the default. -mRcw Enable Rcw constraint handling. The ccfsm condexec pass mostly depends on this. This is the default. -msize-level=level Fine-tune size optimization with regard to instruction lengths and alignment. The recognized values for level are: 0 No size optimization. This level is deprecated and treated like 1. 1 Short instructions are used opportunistically. 2 In addition, alignment of loops and of code after barriers is dropped. 3 In addition, optional data alignment is dropped, and the option -Os is enabled. This defaults to 3 when -Os is in effect. Otherwise, the behavior when this is not set is equivalent to level 1. -mtune=cpu Set instruction scheduling parameters for cpu, overriding any implied by -mcpu=. Supported values for cpu are ARC600 Tune for ARC600 CPU. ARC601 Tune for ARC601 CPU. ARC700 Tune for ARC700 CPU with standard multiplier block. ARC700-xmac Tune for ARC700 CPU with XMAC block. ARC725D Tune for ARC725D CPU. ARC750D Tune for ARC750D CPU. -mmultcost=num Cost to assume for a multiply instruction, with 4 being equal to a normal instruction. -munalign-prob-threshold=probability Set probability threshold for unaligning branches. When tuning for ARC700 and optimizing for speed, branches without filled delay slot are preferably emitted unaligned and long, unless profiling indicates that the probability for the branch to be taken is below probability. The default is (REG_BR_PROB_BASE/2), i.e. 5000. The following options are maintained for backward compatibility, but are now deprecated and will be removed in a future release: -margonaut Obsolete FPX. -mbig-endian -EB Compile code for big-endian targets. Use of these options is now deprecated. Big-endian code is supported by configuring GCC to build "arceb-elf32" and "arceb-linux-uclibc" targets, for which big endian is the default. -mlittle-endian -EL Compile code for little-endian targets. Use of these options is now deprecated.
Little-endian code is supported by configuring GCC to build "arc-elf32" and "arc-linux-uclibc" targets, for which little endian is the default. -mbarrel_shifter Replaced by -mbarrel-shifter. -mdpfp_compact Replaced by -mdpfp-compact. -mdpfp_fast Replaced by -mdpfp-fast. -mdsp_packa Replaced by -mdsp-packa. -mEA Replaced by -mea. -mmac_24 Replaced by -mmac-24. -mmac_d16 Replaced by -mmac-d16. -mspfp_compact Replaced by -mspfp-compact. -mspfp_fast Replaced by -mspfp-fast. -mtune=cpu Values arc600, arc601, arc700 and arc700-xmac for cpu are replaced by ARC600, ARC601, ARC700 and ARC700-xmac respectively. -multcost=num Replaced by -mmultcost. ARM Options These -m options are defined for the ARM port: -mabi=name Generate code for the specified ABI. Permissible values are: apcs-gnu, atpcs, aapcs, aapcs-linux and iwmmxt. -mapcs-frame Generate a stack frame that is compliant with the ARM Procedure Call Standard for all functions, even if this is not strictly necessary for correct execution of the code. Specifying -fomit-frame-pointer with this option causes the stack frames not to be generated for leaf functions. The default is -mno-apcs-frame. This option is deprecated. -mapcs This is a synonym for -mapcs-frame and is deprecated. -mthumb-interwork Generate code that supports calling between the ARM and Thumb instruction sets. Without this option, on pre-v5 architectures, the two instruction sets cannot be reliably used inside one program. The default is -mno-thumb-interwork, since slightly larger code is generated when -mthumb-interwork is specified. In AAPCS configurations this option is meaningless. -mno-sched-prolog Prevent the reordering of instructions in the function prologue, or the merging of those instruction with the instructions in the function's body. This means that all functions start with a recognizable set of instructions (or in fact one of a choice from a small set of different function prologues), and this information can be used to locate the start of functions inside an executable piece of code. The default is -msched-prolog. -mfloat-abi=name Specifies which floating-point ABI to use. Permissible values are: soft, softfp and hard. Specifying soft causes GCC to generate output containing library calls for floating-point operations. softfp allows the generation of code using hardware floating-point instructions, but still uses the soft-float calling conventions. hard allows generation of floating-point instructions and uses FPU-specific calling conventions. The default depends on the specific target configuration. Note that the hard-float and soft-float ABIs are not link- compatible; you must compile your entire program with the same ABI, and link with a compatible set of libraries. -mgeneral-regs-only Generate code which uses only the general-purpose registers. This will prevent the compiler from using floating-point and Advanced SIMD registers but will not impose any restrictions on the assembler. -mlittle-endian Generate code for a processor running in little-endian mode. This is the default for all standard configurations. -mbig-endian Generate code for a processor running in big-endian mode; the default is to compile code for a little-endian processor. -mbe8 -mbe32 When linking a big-endian image select between BE8 and BE32 formats. The option has no effect for little-endian images and is ignored. The default is dependent on the selected target architecture. For ARMv6 and later architectures the default is BE8, for older architectures the default is BE32. 
BE32 format has been deprecated by ARM. -march=name[+extension...] This specifies the name of the target ARM architecture. GCC uses this name to determine what kind of instructions it can emit when generating assembly code. This option can be used in conjunction with or instead of the -mcpu= option. Permissible names are: armv4t, armv5t, armv5te, armv6, armv6j, armv6k, armv6kz, armv6t2, armv6z, armv6zk, armv7, armv7-a, armv7ve, armv8-a, armv8.1-a, armv8.2-a, armv8.3-a, armv8.4-a, armv8.5-a, armv7-r, armv8-r, armv6-m, armv6s-m, armv7-m, armv7e-m, armv8-m.base, armv8-m.main, iwmmxt and iwmmxt2. Additionally, the following architectures, which lack support for the Thumb execution state, are recognized but support is deprecated: armv4. Many of the architectures support extensions. These can be added by appending +extension to the architecture name. Extension options are processed in order and capabilities accumulate. An extension will also enable any necessary base extensions upon which it depends. For example, the +crypto extension will always enable the +simd extension. The exception to the additive construction is for extensions that are prefixed with +no...: these extensions disable the specified option and any other extensions that may depend on the presence of that extension. For example, -march=armv7-a+simd+nofp+vfpv4 is equivalent to writing -march=armv7-a+vfpv4 since the +simd option is entirely disabled by the +nofp option that follows it. Most extension names are generically named, but have an effect that is dependent upon the architecture to which it is applied. For example, the +simd option can be applied to both armv7-a and armv8-a architectures, but will enable the original ARMv7-A Advanced SIMD (Neon) extensions for armv7-a and the ARMv8-A variant for armv8-a. The table below lists the supported extensions for each architecture. Architectures not mentioned do not support any extensions. armv5te armv6 armv6j armv6k armv6kz armv6t2 armv6z armv6zk +fp The VFPv2 floating-point instructions. The extension +vfpv2 can be used as an alias for this extension. +nofp Disable the floating-point instructions. armv7 The common subset of the ARMv7-A, ARMv7-R and ARMv7-M architectures. +fp The VFPv3 floating-point instructions, with 16 double-precision registers. The extension +vfpv3-d16 can be used as an alias for this extension. Note that floating-point is not supported by the base ARMv7-M architecture, but is compatible with both the ARMv7-A and ARMv7-R architectures. +nofp Disable the floating-point instructions. armv7-a +mp The multiprocessing extension. +sec The security extension. +fp The VFPv3 floating-point instructions, with 16 double-precision registers. The extension +vfpv3-d16 can be used as an alias for this extension. +simd The Advanced SIMD (Neon) v1 and the VFPv3 floating- point instructions. The extensions +neon and +neon-vfpv3 can be used as aliases for this extension. +vfpv3 The VFPv3 floating-point instructions, with 32 double-precision registers. +vfpv3-d16-fp16 The VFPv3 floating-point instructions, with 16 double-precision registers and the half-precision floating-point conversion operations. +vfpv3-fp16 The VFPv3 floating-point instructions, with 32 double-precision registers and the half-precision floating-point conversion operations. +vfpv4-d16 The VFPv4 floating-point instructions, with 16 double-precision registers. +vfpv4 The VFPv4 floating-point instructions, with 32 double-precision registers. 
+neon-fp16 The Advanced SIMD (Neon) v1 and the VFPv3 floating- point instructions, with the half-precision floating- point conversion operations. +neon-vfpv4 The Advanced SIMD (Neon) v2 and the VFPv4 floating- point instructions. +nosimd Disable the Advanced SIMD instructions (does not disable floating point). +nofp Disable the floating-point and Advanced SIMD instructions. armv7ve The extended version of the ARMv7-A architecture with support for virtualization. +fp The VFPv4 floating-point instructions, with 16 double-precision registers. The extension +vfpv4-d16 can be used as an alias for this extension. +simd The Advanced SIMD (Neon) v2 and the VFPv4 floating- point instructions. The extension +neon-vfpv4 can be used as an alias for this extension. +vfpv3-d16 The VFPv3 floating-point instructions, with 16 double-precision registers. +vfpv3 The VFPv3 floating-point instructions, with 32 double-precision registers. +vfpv3-d16-fp16 The VFPv3 floating-point instructions, with 16 double-precision registers and the half-precision floating-point conversion operations. +vfpv3-fp16 The VFPv3 floating-point instructions, with 32 double-precision registers and the half-precision floating-point conversion operations. +vfpv4-d16 The VFPv4 floating-point instructions, with 16 double-precision registers. +vfpv4 The VFPv4 floating-point instructions, with 32 double-precision registers. +neon The Advanced SIMD (Neon) v1 and the VFPv3 floating- point instructions. The extension +neon-vfpv3 can be used as an alias for this extension. +neon-fp16 The Advanced SIMD (Neon) v1 and the VFPv3 floating- point instructions, with the half-precision floating- point conversion operations. +nosimd Disable the Advanced SIMD instructions (does not disable floating point). +nofp Disable the floating-point and Advanced SIMD instructions. armv8-a +crc The Cyclic Redundancy Check (CRC) instructions. +simd The ARMv8-A Advanced SIMD and floating-point instructions. +crypto The cryptographic instructions. +nocrypto Disable the cryptographic instructions. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. +sb Speculation Barrier Instruction. +predres Execution and Data Prediction Restriction Instructions. armv8.1-a +simd The ARMv8.1-A Advanced SIMD and floating-point instructions. +crypto The cryptographic instructions. This also enables the Advanced SIMD and floating-point instructions. +nocrypto Disable the cryptographic instructions. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. +sb Speculation Barrier Instruction. +predres Execution and Data Prediction Restriction Instructions. armv8.2-a armv8.3-a +fp16 The half-precision floating-point data processing instructions. This also enables the Advanced SIMD and floating-point instructions. +fp16fml The half-precision floating-point fmla extension. This also enables the half-precision floating-point extension and Advanced SIMD and floating-point instructions. +simd The ARMv8.1-A Advanced SIMD and floating-point instructions. +crypto The cryptographic instructions. This also enables the Advanced SIMD and floating-point instructions. +dotprod Enable the Dot Product extension. This also enables Advanced SIMD instructions. +nocrypto Disable the cryptographic extension. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. +sb Speculation Barrier Instruction. +predres Execution and Data Prediction Restriction Instructions. 
armv8.4-a +fp16 The half-precision floating-point data processing instructions. This also enables the Advanced SIMD and floating-point instructions as well as the Dot Product extension and the half-precision floating- point fmla extension. +simd The ARMv8.3-A Advanced SIMD and floating-point instructions as well as the Dot Product extension. +crypto The cryptographic instructions. This also enables the Advanced SIMD and floating-point instructions as well as the Dot Product extension. +nocrypto Disable the cryptographic extension. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. +sb Speculation Barrier Instruction. +predres Execution and Data Prediction Restriction Instructions. armv8.5-a +fp16 The half-precision floating-point data processing instructions. This also enables the Advanced SIMD and floating-point instructions as well as the Dot Product extension and the half-precision floating- point fmla extension. +simd The ARMv8.3-A Advanced SIMD and floating-point instructions as well as the Dot Product extension. +crypto The cryptographic instructions. This also enables the Advanced SIMD and floating-point instructions as well as the Dot Product extension. +nocrypto Disable the cryptographic extension. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. armv7-r +fp.sp The single-precision VFPv3 floating-point instructions. The extension +vfpv3xd can be used as an alias for this extension. +fp The VFPv3 floating-point instructions with 16 double- precision registers. The extension +vfpv3-d16 can be used as an alias for this extension. +vfpv3xd-d16-fp16 The single-precision VFPv3 floating-point instructions with 16 double-precision registers and the half-precision floating-point conversion operations. +vfpv3-d16-fp16 The VFPv3 floating-point instructions with 16 double- precision registers and the half-precision floating- point conversion operations. +nofp Disable the floating-point extension. +idiv The ARM-state integer division instructions. +noidiv Disable the ARM-state integer division extension. armv7e-m +fp The single-precision VFPv4 floating-point instructions. +fpv5 The single-precision FPv5 floating-point instructions. +fp.dp The single- and double-precision FPv5 floating-point instructions. +nofp Disable the floating-point extensions. armv8-m.main +dsp The DSP instructions. +nodsp Disable the DSP extension. +fp The single-precision floating-point instructions. +fp.dp The single- and double-precision floating-point instructions. +nofp Disable the floating-point extension. armv8-r +crc The Cyclic Redundancy Check (CRC) instructions. +fp.sp The single-precision FPv5 floating-point instructions. +simd The ARMv8-A Advanced SIMD and floating-point instructions. +crypto The cryptographic instructions. +nocrypto Disable the cryptographic instructions. +nofp Disable the floating-point, Advanced SIMD and cryptographic instructions. -march=native causes the compiler to auto-detect the architecture of the build computer. At present, this feature is only supported on GNU/Linux, and not all architectures are recognized. If the auto-detect is unsuccessful the option has no effect. -mtune=name This option specifies the name of the target ARM processor for which GCC should tune the performance of the code. For some ARM implementations better performance can be obtained by using this option. 
Permissible names are: arm7tdmi, arm7tdmi-s, arm710t, arm720t, arm740t, strongarm, strongarm110, strongarm1100, strongarm1110, arm8, arm810, arm9, arm9e, arm920, arm920t, arm922t, arm946e-s, arm966e-s, arm968e-s, arm926ej-s, arm940t, arm9tdmi, arm10tdmi, arm1020t, arm1026ej-s, arm10e, arm1020e, arm1022e, arm1136j-s, arm1136jf-s, mpcore, mpcorenovfp, arm1156t2-s, arm1156t2f-s, arm1176jz-s, arm1176jzf-s, generic-armv7-a, cortex-a5, cortex-a7, cortex-a8, cortex-a9, cortex-a12, cortex-a15, cortex-a17, cortex-a32, cortex-a35, cortex-a53, cortex-a55, cortex-a57, cortex-a72, cortex-a73, cortex-a75, cortex-a76, ares, cortex-r4, cortex-r4f, cortex-r5, cortex-r7, cortex-r8, cortex-r52, cortex-m0, cortex-m0plus, cortex-m1, cortex-m3, cortex-m4, cortex-m7, cortex-m23, cortex-m33, cortex-m1.small-multiply, cortex-m0.small-multiply, cortex-m0plus.small-multiply, exynos-m1, marvell-pj4, neoverse-n1, neoverse-n2, neoverse-v1, xscale, iwmmxt, iwmmxt2, ep9312, fa526, fa626, fa606te, fa626te, fmp626, fa726te, xgene1. Additionally, this option can specify that GCC should tune the performance of the code for a big.LITTLE system. Permissible names are: cortex-a15.cortex-a7, cortex-a17.cortex-a7, cortex-a57.cortex-a53, cortex-a72.cortex-a53, cortex-a72.cortex-a35, cortex-a73.cortex-a53, cortex-a75.cortex-a55, cortex-a76.cortex-a55. -mtune=generic-arch specifies that GCC should tune the performance for a blend of processors within architecture arch. The aim is to generate code that runs well on the current most popular processors, balancing between optimizations that benefit some CPUs in the range, and avoiding performance pitfalls of other CPUs. The effects of this option may change in future GCC versions as CPU models come and go. -mtune permits the same extension options as -mcpu, but the extension options do not affect the tuning of the generated code. -mtune=native causes the compiler to auto-detect the CPU of the build computer. At present, this feature is only supported on GNU/Linux, and not all architectures are recognized. If the auto-detect is unsuccessful the option has no effect. -mcpu=name[+extension...] This specifies the name of the target ARM processor. GCC uses this name to derive the name of the target ARM architecture (as if specified by -march) and the ARM processor type for which to tune for performance (as if specified by -mtune). Where this option is used in conjunction with -march or -mtune, those options take precedence over the appropriate part of this option. Many of the supported CPUs implement optional architectural extensions. Where this is so the architectural extensions are normally enabled by default. If implementations that lack the extension exist, then the extension syntax can be used to disable those extensions that have been omitted. For floating-point and Advanced SIMD (Neon) instructions, the settings of the options -mfloat-abi and -mfpu must also be considered: floating-point and Advanced SIMD instructions will only be used if -mfloat-abi is not set to soft; and any setting of -mfpu other than auto will override the available floating-point and SIMD extension instructions. For example, cortex-a9 can be found in three major configurations: integer only, with just a floating-point unit or with floating-point and Advanced SIMD. The default is to enable all the instructions, but the extensions +nosimd and +nofp can be used to disable just the SIMD or both the SIMD and floating-point instructions respectively. Permissible names for this option are the same as those for -mtune.
The following extension options are common to the listed CPUs: +nodsp Disable the DSP instructions on cortex-m33. +nofp Disables the floating-point instructions on arm9e, arm946e-s, arm966e-s, arm968e-s, arm10e, arm1020e, arm1022e, arm926ej-s, arm1026ej-s, cortex-r5, cortex-r7, cortex-r8, cortex-m4, cortex-m7 and cortex-m33. Disables the floating-point and SIMD instructions on generic-armv7-a, cortex-a5, cortex-a7, cortex-a8, cortex-a9, cortex-a12, cortex-a15, cortex-a17, cortex-a15.cortex-a7, cortex-a17.cortex-a7, cortex-a32, cortex-a35, cortex-a53 and cortex-a55. +nofp.dp Disables the double-precision component of the floating- point instructions on cortex-r5, cortex-r7, cortex-r8, cortex-r52 and cortex-m7. +nosimd Disables the SIMD (but not floating-point) instructions on generic-armv7-a, cortex-a5, cortex-a7 and cortex-a9. +crypto Enables the cryptographic instructions on cortex-a32, cortex-a35, cortex-a53, cortex-a55, cortex-a57, cortex-a72, cortex-a73, cortex-a75, exynos-m1, xgene1, cortex-a57.cortex-a53, cortex-a72.cortex-a53, cortex-a73.cortex-a35, cortex-a73.cortex-a53 and cortex-a75.cortex-a55. Additionally the generic-armv7-a pseudo target defaults to VFPv3 with 16 double-precision registers. It supports the following extension options: mp, sec, vfpv3-d16, vfpv3, vfpv3-d16-fp16, vfpv3-fp16, vfpv4-d16, vfpv4, neon, neon-vfpv3, neon-fp16, neon-vfpv4. The meanings are the same as for the extensions to -march=armv7-a. -mcpu=generic-arch is also permissible, and is equivalent to -march=arch -mtune=generic-arch. See -mtune for more information. -mcpu=native causes the compiler to auto-detect the CPU of the build computer. At present, this feature is only supported on GNU/Linux, and not all architectures are recognized. If the auto-detect is unsuccessful the option has no effect. -mfpu=name This specifies what floating-point hardware (or hardware emulation) is available on the target. Permissible names are: auto, vfpv2, vfpv3, vfpv3-fp16, vfpv3-d16, vfpv3-d16-fp16, vfpv3xd, vfpv3xd-fp16, neon-vfpv3, neon-fp16, vfpv4, vfpv4-d16, fpv4-sp-d16, neon-vfpv4, fpv5-d16, fpv5-sp-d16, fp-armv8, neon-fp-armv8 and crypto-neon-fp-armv8. Note that neon is an alias for neon-vfpv3 and vfp is an alias for vfpv2. The setting auto is the default and is special. It causes the compiler to select the floating-point and Advanced SIMD instructions based on the settings of -mcpu and -march. If the selected floating-point hardware includes the NEON extension (e.g. -mfpu=neon), note that floating-point operations are not generated by GCC's auto-vectorization pass unless -funsafe-math-optimizations is also specified. This is because NEON hardware does not fully implement the IEEE 754 standard for floating-point arithmetic (in particular denormal values are treated as zero), so the use of NEON instructions may lead to a loss of precision. You can also set the fpu name at function level by using the "target("fpu=")" function attributes or pragmas. -mfp16-format=name Specify the format of the "__fp16" half-precision floating- point type. Permissible names are none, ieee, and alternative; the default is none, in which case the "__fp16" type is not defined. -mstructure-size-boundary=n The sizes of all structures and unions are rounded up to a multiple of the number of bits set by this option. Permissible values are 8, 32 and 64. The default value varies for different toolchains. For the COFF targeted toolchain the default value is 8. A value of 64 is only allowed if the underlying ABI supports it. 
Specifying a larger number can produce faster, more efficient code, but can also increase the size of the program. Different values are potentially incompatible. Code compiled with one value cannot necessarily expect to work with code or libraries compiled with another value, if they exchange information using structures or unions. This option is deprecated. -mabort-on-noreturn Generate a call to the function "abort" at the end of a "noreturn" function. It is executed if the function tries to return. -mlong-calls -mno-long-calls Tells the compiler to perform function calls by first loading the address of the function into a register and then performing a subroutine call on this register. This switch is needed if the target function lies outside of the 64-megabyte addressing range of the offset-based version of subroutine call instruction. Even if this switch is enabled, not all function calls are turned into long calls. The heuristic is that static functions, functions that have the "short_call" attribute, functions that are inside the scope of a "#pragma no_long_calls" directive, and functions whose definitions have already been compiled within the current compilation unit are not turned into long calls. The exceptions to this rule are that weak function definitions, functions with the "long_call" attribute or the "section" attribute, and functions that are within the scope of a "#pragma long_calls" directive are always turned into long calls. This feature is not enabled by default. Specifying -mno-long-calls restores the default behavior, as does placing the function calls within the scope of a "#pragma long_calls_off" directive. Note these switches have no effect on how the compiler generates code to handle function calls via function pointers. -msingle-pic-base Treat the register used for PIC addressing as read-only, rather than loading it in the prologue for each function. The runtime system is responsible for initializing this register with an appropriate value before execution begins. -mpic-register=reg Specify the register to be used for PIC addressing. For standard PIC base case, the default is any suitable register determined by compiler. For single PIC base case, the default is R9 if target is EABI based or stack-checking is enabled, otherwise the default is R10. -mpic-data-is-text-relative Assume that the displacement between the text and data segments is fixed at static link time. This permits using PC-relative addressing operations to access data known to be in the data segment. For non-VxWorks RTP targets, this option is enabled by default. When disabled on such targets, it will enable -msingle-pic-base by default. -mpoke-function-name Write the name of each function into the text section, directly preceding the function prologue. The generated code is similar to this: t0 .ascii "arm_poke_function_name", 0 .align t1 .word 0xff000000 + (t1 - t0) arm_poke_function_name mov ip, sp stmfd sp!, {fp, ip, lr, pc} sub fp, ip, #4 When performing a stack backtrace, code can inspect the value of "pc" stored at "fp + 0". If the trace function then looks at location "pc - 12" and the top 8 bits are set, then we know that there is a function name embedded immediately preceding this location and has length "((pc[-3]) & 0xff000000)". -mthumb -marm Select between generating code that executes in ARM and Thumb states. The default for most configurations is to generate code that executes in ARM state, but the default can be changed by configuring GCC with the --with-mode=state configure option. 
You can also override the ARM and Thumb mode for each function by using the "target("thumb")" and "target("arm")" function attributes or pragmas. -mflip-thumb Switch ARM/Thumb modes on alternating functions. This option is provided for regression testing of mixed Thumb/ARM code generation, and is not intended for ordinary use in compiling code. -mtpcs-frame Generate a stack frame that is compliant with the Thumb Procedure Call Standard for all non-leaf functions. (A leaf function is one that does not call any other functions.) The default is -mno-tpcs-frame. -mtpcs-leaf-frame Generate a stack frame that is compliant with the Thumb Procedure Call Standard for all leaf functions. (A leaf function is one that does not call any other functions.) The default is -mno-apcs-leaf-frame. -mcallee-super-interworking Gives all externally visible functions in the file being compiled an ARM instruction set header which switches to Thumb mode before executing the rest of the function. This allows these functions to be called from non-interworking code. This option is not valid in AAPCS configurations because interworking is enabled by default. -mcaller-super-interworking Allows calls via function pointers (including virtual functions) to execute correctly regardless of whether the target code has been compiled for interworking or not. There is a small overhead in the cost of executing a function pointer if this option is enabled. This option is not valid in AAPCS configurations because interworking is enabled by default. -mtp=name Specify the access model for the thread local storage pointer. The valid models are soft, which generates calls to "__aeabi_read_tp", cp15, which fetches the thread pointer from "cp15" directly (supported in the arm6k architecture), and auto, which uses the best available method for the selected processor. The default setting is auto. -mtls-dialect=dialect Specify the dialect to use for accessing thread local storage. Two dialects are supported---gnu and gnu2. The gnu dialect selects the original GNU scheme for supporting local and global dynamic TLS models. The gnu2 dialect selects the GNU descriptor scheme, which provides better performance for shared libraries. The GNU descriptor scheme is compatible with the original scheme, but does require new assembler, linker and library support. Initial and local exec TLS models are unaffected by this option and always use the original scheme. -mword-relocations Only generate absolute relocations on word-sized values (i.e. R_ARM_ABS32). This is enabled by default on targets (uClinux, SymbianOS) where the runtime loader imposes this restriction, and when -fpic or -fPIC is specified. This option conflicts with -mslow-flash-data. -mfix-cortex-m3-ldrd Some Cortex-M3 cores can cause data corruption when "ldrd" instructions with overlapping destination and base registers are used. This option avoids generating these instructions. This option is enabled by default when -mcpu=cortex-m3 is specified. -munaligned-access -mno-unaligned-access Enables (or disables) reading and writing of 16- and 32- bit values from addresses that are not 16- or 32- bit aligned. By default unaligned access is disabled for all pre-ARMv6, all ARMv6-M and for ARMv8-M Baseline architectures, and enabled for all other architectures. If unaligned access is not enabled then words in packed data structures are accessed a byte at a time. 
The ARM attribute "Tag_CPU_unaligned_access" is set in the generated object file to either true or false, depending upon the setting of this option. If unaligned access is enabled then the preprocessor symbol "__ARM_FEATURE_UNALIGNED" is also defined. -mneon-for-64bits Enables using Neon to handle scalar 64-bits operations. This is disabled by default since the cost of moving data from core registers to Neon is high. -mslow-flash-data Assume loading data from flash is slower than fetching instruction. Therefore literal load is minimized for better performance. This option is only supported when compiling for ARMv7 M-profile and off by default. It conflicts with -mword-relocations. -masm-syntax-unified Assume inline assembler is using unified asm syntax. The default is currently off which implies divided syntax. This option has no impact on Thumb2. However, this may change in future releases of GCC. Divided syntax should be considered deprecated. -mrestrict-it Restricts generation of IT blocks to conform to the rules of ARMv8-A. IT blocks can only contain a single 16-bit instruction from a select set of instructions. This option is on by default for ARMv8-A Thumb mode. -mprint-tune-info Print CPU tuning information as comment in assembler file. This is an option used only for regression testing of the compiler and not intended for ordinary use in compiling code. This option is disabled by default. -mverbose-cost-dump Enable verbose cost model dumping in the debug dump files. This option is provided for use in debugging the compiler. -mpure-code Do not allow constant data to be placed in code sections. Additionally, when compiling for ELF object format give all text sections the ELF processor-specific section attribute "SHF_ARM_PURECODE". This option is only available when generating non-pic code for M-profile targets. -mcmse Generate secure code as per the "ARMv8-M Security Extensions: Requirements on Development Tools Engineering Specification", which can be found on <https://developer.arm.com/documentation/ecm0359818/latest/ >. AVR Options These options are defined for AVR implementations: -mmcu=mcu Specify Atmel AVR instruction set architectures (ISA) or MCU type. The default for this option is avr2. GCC supports the following AVR devices and ISAs: "avr2" "Classic" devices with up to 8 KiB of program memory. mcu = "attiny22", "attiny26", "at90s2313", "at90s2323", "at90s2333", "at90s2343", "at90s4414", "at90s4433", "at90s4434", "at90c8534", "at90s8515", "at90s8535". "avr25" "Classic" devices with up to 8 KiB of program memory and with the "MOVW" instruction. mcu = "attiny13", "attiny13a", "attiny24", "attiny24a", "attiny25", "attiny261", "attiny261a", "attiny2313", "attiny2313a", "attiny43u", "attiny44", "attiny44a", "attiny45", "attiny48", "attiny441", "attiny461", "attiny461a", "attiny4313", "attiny84", "attiny84a", "attiny85", "attiny87", "attiny88", "attiny828", "attiny841", "attiny861", "attiny861a", "ata5272", "ata6616c", "at86rf401". "avr3" "Classic" devices with 16 KiB up to 64 KiB of program memory. mcu = "at76c711", "at43usb355". "avr31" "Classic" devices with 128 KiB of program memory. mcu = "atmega103", "at43usb320". "avr35" "Classic" devices with 16 KiB up to 64 KiB of program memory and with the "MOVW" instruction. mcu = "attiny167", "attiny1634", "atmega8u2", "atmega16u2", "atmega32u2", "ata5505", "ata6617c", "ata664251", "at90usb82", "at90usb162". "avr4" "Enhanced" devices with up to 8 KiB of program memory. 
mcu = "atmega48", "atmega48a", "atmega48p", "atmega48pa", "atmega48pb", "atmega8", "atmega8a", "atmega8hva", "atmega88", "atmega88a", "atmega88p", "atmega88pa", "atmega88pb", "atmega8515", "atmega8535", "ata6285", "ata6286", "ata6289", "ata6612c", "at90pwm1", "at90pwm2", "at90pwm2b", "at90pwm3", "at90pwm3b", "at90pwm81". "avr5" "Enhanced" devices with 16 KiB up to 64 KiB of program memory. mcu = "atmega16", "atmega16a", "atmega16hva", "atmega16hva2", "atmega16hvb", "atmega16hvbrevb", "atmega16m1", "atmega16u4", "atmega161", "atmega162", "atmega163", "atmega164a", "atmega164p", "atmega164pa", "atmega165", "atmega165a", "atmega165p", "atmega165pa", "atmega168", "atmega168a", "atmega168p", "atmega168pa", "atmega168pb", "atmega169", "atmega169a", "atmega169p", "atmega169pa", "atmega32", "atmega32a", "atmega32c1", "atmega32hvb", "atmega32hvbrevb", "atmega32m1", "atmega32u4", "atmega32u6", "atmega323", "atmega324a", "atmega324p", "atmega324pa", "atmega325", "atmega325a", "atmega325p", "atmega325pa", "atmega328", "atmega328p", "atmega328pb", "atmega329", "atmega329a", "atmega329p", "atmega329pa", "atmega3250", "atmega3250a", "atmega3250p", "atmega3250pa", "atmega3290", "atmega3290a", "atmega3290p", "atmega3290pa", "atmega406", "atmega64", "atmega64a", "atmega64c1", "atmega64hve", "atmega64hve2", "atmega64m1", "atmega64rfr2", "atmega640", "atmega644", "atmega644a", "atmega644p", "atmega644pa", "atmega644rfr2", "atmega645", "atmega645a", "atmega645p", "atmega649", "atmega649a", "atmega649p", "atmega6450", "atmega6450a", "atmega6450p", "atmega6490", "atmega6490a", "atmega6490p", "ata5795", "ata5790", "ata5790n", "ata5791", "ata6613c", "ata6614q", "ata5782", "ata5831", "ata8210", "ata8510", "ata5702m322", "at90pwm161", "at90pwm216", "at90pwm316", "at90can32", "at90can64", "at90scr100", "at90usb646", "at90usb647", "at94k", "m3000". "avr51" "Enhanced" devices with 128 KiB of program memory. mcu = "atmega128", "atmega128a", "atmega128rfa1", "atmega128rfr2", "atmega1280", "atmega1281", "atmega1284", "atmega1284p", "atmega1284rfr2", "at90can128", "at90usb1286", "at90usb1287". "avr6" "Enhanced" devices with 3-byte PC, i.e. with more than 128 KiB of program memory. mcu = "atmega256rfr2", "atmega2560", "atmega2561", "atmega2564rfr2". "avrxmega2" "XMEGA" devices with more than 8 KiB and up to 64 KiB of program memory. mcu = "atxmega8e5", "atxmega16a4", "atxmega16a4u", "atxmega16c4", "atxmega16d4", "atxmega16e5", "atxmega32a4", "atxmega32a4u", "atxmega32c3", "atxmega32c4", "atxmega32d3", "atxmega32d4", "atxmega32e5". "avrxmega3" "XMEGA" devices with up to 64 KiB of combined program memory and RAM, and with program memory visible in the RAM address space. mcu = "attiny202", "attiny204", "attiny212", "attiny214", "attiny402", "attiny404", "attiny406", "attiny412", "attiny414", "attiny416", "attiny417", "attiny804", "attiny806", "attiny807", "attiny814", "attiny816", "attiny817", "attiny1604", "attiny1606", "attiny1607", "attiny1614", "attiny1616", "attiny1617", "attiny3214", "attiny3216", "attiny3217", "atmega808", "atmega809", "atmega1608", "atmega1609", "atmega3208", "atmega3209", "atmega4808", "atmega4809". "avrxmega4" "XMEGA" devices with more than 64 KiB and up to 128 KiB of program memory. mcu = "atxmega64a3", "atxmega64a3u", "atxmega64a4u", "atxmega64b1", "atxmega64b3", "atxmega64c3", "atxmega64d3", "atxmega64d4". "avrxmega5" "XMEGA" devices with more than 64 KiB and up to 128 KiB of program memory and more than 64 KiB of RAM. mcu = "atxmega64a1", "atxmega64a1u". 
"avrxmega6" "XMEGA" devices with more than 128 KiB of program memory. mcu = "atxmega128a3", "atxmega128a3u", "atxmega128b1", "atxmega128b3", "atxmega128c3", "atxmega128d3", "atxmega128d4", "atxmega192a3", "atxmega192a3u", "atxmega192c3", "atxmega192d3", "atxmega256a3", "atxmega256a3b", "atxmega256a3bu", "atxmega256a3u", "atxmega256c3", "atxmega256d3", "atxmega384c3", "atxmega384d3". "avrxmega7" "XMEGA" devices with more than 128 KiB of program memory and more than 64 KiB of RAM. mcu = "atxmega128a1", "atxmega128a1u", "atxmega128a4u". "avrtiny" "TINY" Tiny core devices with 512 B up to 4 KiB of program memory. mcu = "attiny4", "attiny5", "attiny9", "attiny10", "attiny20", "attiny40". "avr1" This ISA is implemented by the minimal AVR core and supported for assembler only. mcu = "attiny11", "attiny12", "attiny15", "attiny28", "at90s1200". -mabsdata Assume that all data in static storage can be accessed by LDS / STS instructions. This option has only an effect on reduced Tiny devices like ATtiny40. See also the "absdata" AVR Variable Attributes,variable attribute. -maccumulate-args Accumulate outgoing function arguments and acquire/release the needed stack space for outgoing function arguments once in function prologue/epilogue. Without this option, outgoing arguments are pushed before calling a function and popped afterwards. Popping the arguments after the function call can be expensive on AVR so that accumulating the stack space might lead to smaller executables because arguments need not be removed from the stack after such a function call. This option can lead to reduced code size for functions that perform several calls to functions that get their arguments on the stack like calls to printf-like functions. -mbranch-cost=cost Set the branch costs for conditional branch instructions to cost. Reasonable values for cost are small, non-negative integers. The default branch cost is 0. -mcall-prologues Functions prologues/epilogues are expanded as calls to appropriate subroutines. Code size is smaller. -mgas-isr-prologues Interrupt service routines (ISRs) may use the "__gcc_isr" pseudo instruction supported by GNU Binutils. If this option is on, the feature can still be disabled for individual ISRs by means of the AVR Function Attributes,,"no_gccisr" function attribute. This feature is activated per default if optimization is on (but not with -Og, @pxref{Optimize Options}), and if GNU Binutils support PR21683 ("https://sourceware.org/PR21683"). -mint8 Assume "int" to be 8-bit integer. This affects the sizes of all types: a "char" is 1 byte, an "int" is 1 byte, a "long" is 2 bytes, and "long long" is 4 bytes. Please note that this option does not conform to the C standards, but it results in smaller code size. -mmain-is-OS_task Do not save registers in "main". The effect is the same like attaching attribute AVR Function Attributes,,"OS_task" to "main". It is activated per default if optimization is on. -mn-flash=num Assume that the flash memory has a size of num times 64 KiB. -mno-interrupts Generated code is not compatible with hardware interrupts. Code size is smaller. -mrelax Try to replace "CALL" resp. "JMP" instruction by the shorter "RCALL" resp. "RJMP" instruction if applicable. Setting -mrelax just adds the --mlink-relax option to the assembler's command line and the --relax option to the linker's command line. Jump relaxing is performed by the linker because jump offsets are not known before code is located. 
Therefore, the assembler code generated by the compiler is the same, but the instructions in the executable may differ from instructions in the assembler code. Relaxing must be turned on if linker stubs are needed, see the section on "EIND" and linker stubs below. -mrmw Assume that the device supports the Read-Modify-Write instructions "XCH", "LAC", "LAS" and "LAT". -mshort-calls Assume that "RJMP" and "RCALL" can target the whole program memory. This option is used internally for multilib selection. It is not an optimization option, and you don't need to set it by hand. -msp8 Treat the stack pointer register as an 8-bit register, i.e. assume the high byte of the stack pointer is zero. In general, you don't need to set this option by hand. This option is used internally by the compiler to select and build multilibs for architectures "avr2" and "avr25". These architectures mix devices with and without "SPH". For any setting other than -mmcu=avr2 or -mmcu=avr25 the compiler driver adds or removes this option from the compiler proper's command line, because the compiler then knows if the device or architecture has an 8-bit stack pointer and thus no "SPH" register or not. -mstrict-X Use address register "X" in a way proposed by the hardware. This means that "X" is only used in indirect, post-increment or pre-decrement addressing. Without this option, the "X" register may be used in the same way as "Y" or "Z" which then is emulated by additional instructions. For example, loading a value with "X+const" addressing with a small non-negative "const < 64" to a register Rn is performed as adiw r26, const ; X += const ld <Rn>, X ; <Rn> = *X sbiw r26, const ; X -= const -mtiny-stack Only change the lower 8 bits of the stack pointer. -mfract-convert-truncate Allow to use truncation instead of rounding towards zero for fractional fixed-point types. -nodevicelib Don't link against AVR-LibC's device specific library "lib<mcu>.a". -nodevicespecs Don't add -specs=device-specs/specs-<mcu> to the compiler driver's command line. The user takes responsibility for supplying the sub-processes like compiler proper, assembler and linker with appropriate command line options. -Waddr-space-convert Warn about conversions between address spaces in the case where the resulting address space is not contained in the incoming address space. -Wmisspelled-isr Warn if the ISR is misspelled, i.e. without __vector prefix. Enabled by default. "EIND" and Devices with More Than 128 Ki Bytes of Flash Pointers in the implementation are 16 bits wide. The address of a function or label is represented as word address so that indirect jumps and calls can target any code address in the range of 64 Ki words. In order to facilitate indirect jump on devices with more than 128 Ki bytes of program memory space, there is a special function register called "EIND" that serves as most significant part of the target address when "EICALL" or "EIJMP" instructions are used. Indirect jumps and calls on these devices are handled as follows by the compiler and are subject to some limitations: * The compiler never sets "EIND". * The compiler uses "EIND" implicitly in "EICALL"/"EIJMP" instructions or might read "EIND" directly in order to emulate an indirect call/jump by means of a "RET" instruction. * The compiler assumes that "EIND" never changes during the startup code or during the application. In particular, "EIND" is not saved/restored in function or interrupt service routine prologue/epilogue. 
* For indirect calls to functions and computed goto, the linker generates stubs. Stubs are jump pads sometimes also called trampolines. Thus, the indirect call/jump jumps to such a stub. The stub contains a direct jump to the desired address. * Linker relaxation must be turned on so that the linker generates the stubs correctly in all situations. See the compiler option -mrelax and the linker option --relax. There are corner cases where the linker is supposed to generate stubs but aborts without relaxation and without a helpful error message. * The default linker script is arranged for code with "EIND = 0". If code is supposed to work for a setup with "EIND != 0", a custom linker script has to be used in order to place the sections whose names start with ".trampolines" into the segment where "EIND" points to. * The startup code from libgcc never sets "EIND". Notice that startup code is a blend of code from libgcc and AVR-LibC. For the impact of AVR-LibC on "EIND", see the AVR-LibC user manual ("http://nongnu.org/avr-libc/user-manual/"). * It is legitimate for user-specific startup code to set up "EIND" early, for example by means of initialization code located in section ".init3". Such code runs prior to general startup code that initializes RAM and calls constructors, but after the bit of startup code from AVR-LibC that sets "EIND" to the segment where the vector table is located. #include <avr/io.h> static void __attribute__((section(".init3"),naked,used,no_instrument_function)) init3_set_eind (void) { __asm volatile ("ldi r24,pm_hh8(__trampolines_start)\n\t" "out %i0,r24" :: "n" (&EIND) : "r24","memory"); } The "__trampolines_start" symbol is defined in the linker script. * Stubs are generated automatically by the linker if the following two conditions are met: - The address of a label is taken by means of the "gs" modifier (short for generate stubs), like so: LDI r24, lo8(gs(<func>)) LDI r25, hi8(gs(<func>)) - The final location of that label is in a code segment outside the segment where the stubs are located. * The compiler emits such "gs" modifiers for code labels in the following situations: - Taking the address of a function or code label. - Computed goto. - If a prologue-save function is used, see the -mcall-prologues command-line option. - Switch/case dispatch tables. If you do not want such dispatch tables you can specify the -fno-jump-tables command-line option. - C and C++ constructors/destructors called during startup/shutdown. - If the tools hit a "gs()" modifier explained above. * Jumping to non-symbolic addresses like so is not supported: int main (void) { /* Call function at word address 0x2 */ return ((int(*)(void)) 0x2)(); } Instead, a stub has to be set up, i.e. the function has to be called through a symbol ("func_4" in the example): int main (void) { extern int func_4 (void); /* Call function at byte address 0x4 */ return func_4(); } and the application be linked with -Wl,--defsym,func_4=0x4. Alternatively, "func_4" can be defined in the linker script. Handling of the "RAMPD", "RAMPX", "RAMPY" and "RAMPZ" Special Function Registers Some AVR devices support memories larger than the 64 KiB range that can be accessed with 16-bit pointers. To access memory locations outside this 64 KiB range, the content of a "RAMP" register is used as high part of the address: The "X", "Y", "Z" address register is concatenated with the "RAMPX", "RAMPY", "RAMPZ" special function register, respectively, to get a wide address. Similarly, "RAMPD" is used together with direct addressing.
* The startup code initializes the "RAMP" special function registers with zero. * If a named address space other than the generic one or "__flash" is used, then "RAMPZ" is set as needed before the operation. * If the device supports RAM larger than 64 KiB and the compiler needs to change "RAMPZ" to accomplish an operation, "RAMPZ" is reset to zero after the operation. * If the device comes with a specific "RAMP" register, the ISR prologue/epilogue saves/restores that SFR and initializes it with zero in case the ISR code might (implicitly) use it. * RAM larger than 64 KiB is not supported by GCC for AVR targets. If you use inline assembler to read from locations outside the 16-bit address range and change one of the "RAMP" registers, you must reset it to zero after the access. AVR Built-in Macros GCC defines several built-in macros so that the user code can test for the presence or absence of features. Almost all of the following built-in macros are deduced from device capabilities and thus triggered by the -mmcu= command-line option. For even more AVR-specific built-in macros see AVR Named Address Spaces and AVR Built-in Functions. "__AVR_ARCH__" Built-in macro that resolves to a decimal number that identifies the architecture and depends on the -mmcu=mcu option. Possible values are: 2, 25, 3, 31, 35, 4, 5, 51, 6 for mcu="avr2", "avr25", "avr3", "avr31", "avr35", "avr4", "avr5", "avr51", "avr6", respectively, and 100, 102, 103, 104, 105, 106, 107 for mcu="avrtiny", "avrxmega2", "avrxmega3", "avrxmega4", "avrxmega5", "avrxmega6", "avrxmega7", respectively. If mcu specifies a device, this built-in macro is set accordingly. For example, with -mmcu=atmega8 the macro is defined to 4. "__AVR_Device__" Setting -mmcu=device defines this built-in macro which reflects the device's name. For example, -mmcu=atmega8 defines the built-in macro "__AVR_ATmega8__", -mmcu=attiny261a defines "__AVR_ATtiny261A__", etc. The built-in macros' names follow the scheme "__AVR_Device__" where Device is the device name as given in the AVR user manual. The difference between Device in the built-in macro and device in -mmcu=device is that the latter is always lowercase. If device is not a device but only a core architecture like avr51, this macro is not defined. "__AVR_DEVICE_NAME__" Setting -mmcu=device defines this built-in macro to the device's name. For example, with -mmcu=atmega8 the macro is defined to "atmega8". If device is not a device but only a core architecture like avr51, this macro is not defined. "__AVR_XMEGA__" The device / architecture belongs to the XMEGA family of devices. "__AVR_HAVE_ELPM__" The device has the "ELPM" instruction. "__AVR_HAVE_ELPMX__" The device has the "ELPM Rn,Z" and "ELPM Rn,Z+" instructions. "__AVR_HAVE_MOVW__" The device has the "MOVW" instruction to perform 16-bit register-register moves. "__AVR_HAVE_LPMX__" The device has the "LPM Rn,Z" and "LPM Rn,Z+" instructions. "__AVR_HAVE_MUL__" The device has a hardware multiplier. "__AVR_HAVE_JMP_CALL__" The device has the "JMP" and "CALL" instructions. This is the case for devices with more than 8 KiB of program memory. "__AVR_HAVE_EIJMP_EICALL__" "__AVR_3_BYTE_PC__" The device has the "EIJMP" and "EICALL" instructions. This is the case for devices with more than 128 KiB of program memory. This also means that the program counter (PC) is 3 bytes wide. "__AVR_2_BYTE_PC__" The program counter (PC) is 2 bytes wide. This is the case for devices with up to 128 KiB of program memory.
"__AVR_HAVE_8BIT_SP__" "__AVR_HAVE_16BIT_SP__" The stack pointer (SP) register is treated as 8-bit respectively 16-bit register by the compiler. The definition of these macros is affected by -mtiny-stack. "__AVR_HAVE_SPH__" "__AVR_SP8__" The device has the SPH (high part of stack pointer) special function register or has an 8-bit stack pointer, respectively. The definition of these macros is affected by -mmcu= and in the cases of -mmcu=avr2 and -mmcu=avr25 also by -msp8. "__AVR_HAVE_RAMPD__" "__AVR_HAVE_RAMPX__" "__AVR_HAVE_RAMPY__" "__AVR_HAVE_RAMPZ__" The device has the "RAMPD", "RAMPX", "RAMPY", "RAMPZ" special function register, respectively. "__NO_INTERRUPTS__" This macro reflects the -mno-interrupts command-line option. "__AVR_ERRATA_SKIP__" "__AVR_ERRATA_SKIP_JMP_CALL__" Some AVR devices (AT90S8515, ATmega103) must not skip 32-bit instructions because of a hardware erratum. Skip instructions are "SBRS", "SBRC", "SBIS", "SBIC" and "CPSE". The second macro is only defined if "__AVR_HAVE_JMP_CALL__" is also set. "__AVR_ISA_RMW__" The device has Read-Modify-Write instructions (XCH, LAC, LAS and LAT). "__AVR_SFR_OFFSET__=offset" Instructions that can address I/O special function registers directly like "IN", "OUT", "SBI", etc. may use a different address as if addressed by an instruction to access RAM like "LD" or "STS". This offset depends on the device architecture and has to be subtracted from the RAM address in order to get the respective I/O address. "__AVR_SHORT_CALLS__" The -mshort-calls command line option is set. "__AVR_PM_BASE_ADDRESS__=addr" Some devices support reading from flash memory by means of "LD*" instructions. The flash memory is seen in the data address space at an offset of "__AVR_PM_BASE_ADDRESS__". If this macro is not defined, this feature is not available. If defined, the address space is linear and there is no need to put ".rodata" into RAM. This is handled by the default linker description file, and is currently available for "avrtiny" and "avrxmega3". Even more convenient, there is no need to use address spaces like "__flash" or features like attribute "progmem" and "pgm_read_*". "__WITH_AVRLIBC__" The compiler is configured to be used together with AVR-Libc. See the --with-avrlibc configure option. Blackfin Options -mcpu=cpu[-sirevision] Specifies the name of the target Blackfin processor. Currently, cpu can be one of bf512, bf514, bf516, bf518, bf522, bf523, bf524, bf525, bf526, bf527, bf531, bf532, bf533, bf534, bf536, bf537, bf538, bf539, bf542, bf544, bf547, bf548, bf549, bf542m, bf544m, bf547m, bf548m, bf549m, bf561, bf592. The optional sirevision specifies the silicon revision of the target Blackfin processor. Any workarounds available for the targeted silicon revision are enabled. If sirevision is none, no workarounds are enabled. If sirevision is any, all workarounds for the targeted processor are enabled. The "__SILICON_REVISION__" macro is defined to two hexadecimal digits representing the major and minor numbers in the silicon revision. If sirevision is none, the "__SILICON_REVISION__" is not defined. If sirevision is any, the "__SILICON_REVISION__" is defined to be 0xffff. If this optional sirevision is not used, GCC assumes the latest known silicon revision of the targeted Blackfin processor. GCC defines a preprocessor macro for the specified cpu. For the bfin-elf toolchain, this option causes the hardware BSP provided by libgloss to be linked in if -msim is not given. Without this option, bf532 is used as the processor by default. 
Note that support for bf561 is incomplete. For bf561, only the preprocessor macro is defined. -msim Specifies that the program will be run on the simulator. This causes the simulator BSP provided by libgloss to be linked in. This option has effect only for bfin-elf toolchain. Certain other options, such as -mid-shared-library and -mfdpic, imply -msim. -momit-leaf-frame-pointer Don't keep the frame pointer in a register for leaf functions. This avoids the instructions to save, set up and restore frame pointers and makes an extra register available in leaf functions. -mspecld-anomaly When enabled, the compiler ensures that the generated code does not contain speculative loads after jump instructions. If this option is used, "__WORKAROUND_SPECULATIVE_LOADS" is defined. -mno-specld-anomaly Don't generate extra code to prevent speculative loads from occurring. -mcsync-anomaly When enabled, the compiler ensures that the generated code does not contain CSYNC or SSYNC instructions too soon after conditional branches. If this option is used, "__WORKAROUND_SPECULATIVE_SYNCS" is defined. -mno-csync-anomaly Don't generate extra code to prevent CSYNC or SSYNC instructions from occurring too soon after a conditional branch. -mlow64k When enabled, the compiler is free to take advantage of the knowledge that the entire program fits into the low 64k of memory. -mno-low64k Assume that the program is arbitrarily large. This is the default. -mstack-check-l1 Do stack checking using information placed into L1 scratchpad memory by the uClinux kernel. -mid-shared-library Generate code that supports shared libraries via the library ID method. This allows for execute in place and shared libraries in an environment without virtual memory management. This option implies -fPIC. With a bfin-elf target, this option implies -msim. -mno-id-shared-library Generate code that doesn't assume ID-based shared libraries are being used. This is the default. -mleaf-id-shared-library Generate code that supports shared libraries via the library ID method, but assumes that this library or executable won't link against any other ID shared libraries. That allows the compiler to use faster code for jumps and calls. -mno-leaf-id-shared-library Do not assume that the code being compiled won't link against any ID shared libraries. Slower code is generated for jump and call insns. -mshared-library-id=n Specifies the identification number of the ID-based shared library being compiled. Specifying a value of 0 generates more compact code; specifying other values forces the allocation of that number to the current library but is no more space- or time-efficient than omitting this option. -msep-data Generate code that allows the data segment to be located in a different area of memory from the text segment. This allows for execute in place in an environment without virtual memory management by eliminating relocations against the text section. -mno-sep-data Generate code that assumes that the data segment follows the text segment. This is the default. -mlong-calls -mno-long-calls Tells the compiler to perform function calls by first loading the address of the function into a register and then performing a subroutine call on this register. This switch is needed if the target function lies outside of the 24-bit addressing range of the offset-based version of subroutine call instruction. This feature is not enabled by default. Specifying -mno-long-calls restores the default behavior. 
Note these switches have no effect on how the compiler generates code to handle function calls via function pointers. -mfast-fp Link with the fast floating-point library. This library relaxes some of the IEEE floating-point standard's rules for checking inputs against Not-a-Number (NAN), in the interest of performance. -minline-plt Enable inlining of PLT entries in function calls to functions that are not known to bind locally. It has no effect without -mfdpic. -mmulticore Build a standalone application for multicore Blackfin processors. This option causes proper start files and link scripts supporting multicore to be used, and defines the macro "__BFIN_MULTICORE". It can only be used with -mcpu=bf561[-sirevision]. This option can be used with -mcorea or -mcoreb, which selects the one-application-per-core programming model. Without -mcorea or -mcoreb, the single-application/dual-core programming model is used. In this model, the main function of Core B should be named as "coreb_main". If this option is not used, the single-core application programming model is used. -mcorea Build a standalone application for Core A of BF561 when using the one-application-per-core programming model. Proper start files and link scripts are used to support Core A, and the macro "__BFIN_COREA" is defined. This option can only be used in conjunction with -mmulticore. -mcoreb Build a standalone application for Core B of BF561 when using the one-application-per-core programming model. Proper start files and link scripts are used to support Core B, and the macro "__BFIN_COREB" is defined. When this option is used, "coreb_main" should be used instead of "main". This option can only be used in conjunction with -mmulticore. -msdram Build a standalone application for SDRAM. Proper start files and link scripts are used to put the application into SDRAM, and the macro "__BFIN_SDRAM" is defined. The loader should initialize SDRAM before loading the application. -micplb Assume that ICPLBs are enabled at run time. This has an effect on certain anomaly workarounds. For Linux targets, the default is to assume ICPLBs are enabled; for standalone applications the default is off. C6X Options -march=name This specifies the name of the target architecture. GCC uses this name to determine what kind of instructions it can emit when generating assembly code. Permissible names are: c62x, c64x, c64x+, c67x, c67x+, c674x. -mbig-endian Generate code for a big-endian target. -mlittle-endian Generate code for a little-endian target. This is the default. -msim Choose startup files and linker script suitable for the simulator. -msdata=default Put small global and static data in the ".neardata" section, which is pointed to by register "B14". Put small uninitialized global and static data in the ".bss" section, which is adjacent to the ".neardata" section. Put small read-only data into the ".rodata" section. The corresponding sections used for large pieces of data are ".fardata", ".far" and ".const". -msdata=all Put all data, not just small objects, into the sections reserved for small data, and use addressing relative to the "B14" register to access them. -msdata=none Make no use of the sections reserved for small data, and use absolute addresses to access all data. Put all initialized global and static data in the ".fardata" section, and all uninitialized data in the ".far" section. Put all constant data into the ".const" section. CRIS Options These options are defined specifically for the CRIS ports. 
-march=architecture-type -mcpu=architecture-type Generate code for the specified architecture. The choices for architecture-type are v3, v8 and v10 for ETRAX 4, ETRAX 100, and ETRAX 100 LX, respectively. The default is v0, except for cris-axis-linux-gnu, where the default is v10. -mtune=architecture-type Tune to architecture-type everything applicable about the generated code, except for the ABI and the set of available instructions. The choices for architecture-type are the same as for -march=architecture-type. -mmax-stack-frame=n Warn when the stack frame of a function exceeds n bytes. -metrax4 -metrax100 The options -metrax4 and -metrax100 are synonyms for -march=v3 and -march=v8 respectively. -mmul-bug-workaround -mno-mul-bug-workaround Work around a bug in the "muls" and "mulu" instructions for CPU models where it applies. This option is active by default. -mpdebug Enable CRIS-specific verbose debug-related information in the assembly code. This option also has the effect of turning off the #NO_APP formatted-code indicator to the assembler at the beginning of the assembly file. -mcc-init Do not use condition-code results from the previous instruction; always emit compare and test instructions before use of condition codes. -mno-side-effects Do not emit instructions with side effects in addressing modes other than post-increment. -mstack-align -mno-stack-align -mdata-align -mno-data-align -mconst-align -mno-const-align These options (no- options) arrange (eliminate arrangements) for the stack frame, individual data and constants to be aligned for the maximum single data access size for the chosen CPU model. The default is to arrange for 32-bit alignment. ABI details such as structure layout are not affected by these options. -m32-bit -m16-bit -m8-bit Similar to the stack-, data- and const-align options above, these options arrange for the stack frame, writable data and constants to all be 32-bit, 16-bit or 8-bit aligned. The default is 32-bit alignment. -mno-prologue-epilogue -mprologue-epilogue With -mno-prologue-epilogue, the normal function prologue and epilogue which set up the stack frame are omitted and no return instructions or return sequences are generated in the code. Use this option only together with visual inspection of the compiled code: no warnings or errors are generated when call-saved registers must be saved, or storage for local variables needs to be allocated. -mno-gotplt -mgotplt With -fpic and -fPIC, don't generate (do generate) instruction sequences that load addresses for functions from the PLT part of the GOT rather than (traditional on other architectures) calls to the PLT. The default is -mgotplt. -melf Legacy no-op option only recognized with the cris-axis-elf and cris-axis-linux-gnu targets. -mlinux Legacy no-op option only recognized with the cris-axis-linux-gnu target. -sim This option, recognized for the cris-axis-elf target, arranges to link with input-output functions from a simulator library. Code, initialized data and zero-initialized data are allocated consecutively. -sim2 Like -sim, but pass linker options to locate initialized data at 0x40000000 and zero-initialized data at 0x80000000. CR16 Options These options are defined specifically for the CR16 ports. -mmac Enable the use of multiply-accumulate instructions. Disabled by default. -mcr16cplus -mcr16c Generate code for the CR16C or CR16C+ architecture. The CR16C+ architecture is the default. -msim Link with the library libsim.a, which is compatible with the simulator. Applicable to the ELF compiler only.
-mint32 Make the integer type 32 bits wide. -mbit-ops Generate "sbit"/"cbit" instructions for bit manipulations. -mdata-model=model Choose a data model. The choices for model are near, far or medium. medium is the default. However, far is not valid with -mcr16c, as the CR16C architecture does not support the far data model. C-SKY Options GCC supports these options when compiling for C-SKY V2 processors. -march=arch Specify the C-SKY target architecture. Valid values for arch are: ck801, ck802, ck803, ck807, and ck810. The default is ck810. -mcpu=cpu Specify the C-SKY target processor. Valid values for cpu are: ck801, ck801t, ck802, ck802t, ck802j, ck803, ck803h, ck803t, ck803ht, ck803f, ck803fh, ck803e, ck803eh, ck803et, ck803eht, ck803ef, ck803efh, ck803ft, ck803eft, ck803efht, ck803r1, ck803hr1, ck803tr1, ck803htr1, ck803fr1, ck803fhr1, ck803er1, ck803ehr1, ck803etr1, ck803ehtr1, ck803efr1, ck803efhr1, ck803ftr1, ck803eftr1, ck803efhtr1, ck803s, ck803st, ck803se, ck803sf, ck803sef, ck803seft, ck807e, ck807ef, ck807, ck807f, ck810e, ck810et, ck810ef, ck810eft, ck810, ck810v, ck810f, ck810t, ck810fv, ck810tv, ck810ft, and ck810ftv. -mbig-endian -EB -mlittle-endian -EL Select big- or little-endian code. The default is little-endian. -mhard-float -msoft-float Select hardware or software floating-point implementations. The default is soft float. -mdouble-float -mno-double-float When -mhard-float is in effect, enable generation of double-precision float instructions. This is the default except when compiling for CK803. -mfdivdu -mno-fdivdu When -mhard-float is in effect, enable generation of "frecipd", "fsqrtd", and "fdivd" instructions. This is the default except when compiling for CK803. -mfpu=fpu Select the floating-point processor. This option can only be used with -mhard-float. Values for fpu are fpv2_sf (equivalent to -mno-double-float -mno-fdivdu), fpv2 (-mdouble-float -mno-fdivdu), and fpv2_divd (-mdouble-float -mfdivdu). -melrw -mno-elrw Enable the extended "lrw" instruction. This option defaults to on for CK801 and off otherwise. -mistack -mno-istack Enable interrupt stack instructions; the default is off. The -mistack option is required to handle the "interrupt" and "isr" function attributes (see the example below). -mmp Enable multiprocessor instructions; the default is off. -mcp Enable coprocessor instructions; the default is off. -mcache Enable coprocessor instructions; the default is off. -msecurity Enable C-SKY security instructions; the default is off. -mtrust Enable C-SKY trust instructions; the default is off. -mdsp -medsp -mvdsp Enable C-SKY DSP, Enhanced DSP, or Vector DSP instructions, respectively. All of these options default to off. -mdiv -mno-div Generate divide instructions. The default is off. -msmart -mno-smart Generate code for Smart Mode, using only registers numbered 0-7 to allow use of 16-bit instructions. This option is ignored for CK801 where this is the required behavior, and it defaults to on for CK802. For other targets, the default is off. -mhigh-registers -mno-high-registers Generate code using the high registers numbered 16-31. This option is not supported on CK801, CK802, or CK803, and is enabled by default for other processors. -manchor -mno-anchor Generate code using global anchor symbol addresses. -mpushpop -mno-pushpop Generate code using "push" and "pop" instructions. This option defaults to on. -mmultiple-stld -mstm -mno-multiple-stld -mno-stm Generate code using "stm" and "ldm" instructions. This option isn't supported on CK801 but is enabled by default on other processors.
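As an illustration of the "interrupt" attribute mentioned under -mistack above (a minimal sketch, not taken from this manual; the handler and variable names are illustrative), an interrupt service routine for a C-SKY target built with -mistack might look like:

        volatile unsigned long timer_ticks;

        /* -mistack lets the compiler emit the interrupt stack
           instructions needed for the handler's prologue/epilogue. */
        __attribute__((interrupt)) void timer_irq_handler(void)
        {
            timer_ticks++;      /* minimal handler body */
        }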
-mconstpool -mno-constpool Create constant pools in the compiler instead of deferring this to the assembler. This option is the default and is required for correct code generation on CK801 and CK802, and is optional on other processors. -mstack-size -mno-stack-size Emit ".stack_size" directives for each function in the assembly output. This option defaults to off. -mccrt -mno-ccrt Generate code for the C-SKY compiler runtime instead of libgcc. This option defaults to off. -mbranch-cost=n Set the branch costs to roughly "n" instructions. The default is 1. -msched-prolog -mno-sched-prolog Permit scheduling of function prologue and epilogue sequences. Using this option can result in code that is not compliant with the C-SKY V2 ABI prologue requirements and that cannot be debugged or backtraced. It is disabled by default. Darwin Options These options are defined for all architectures running the Darwin operating system. FSF GCC on Darwin does not create "fat" object files; it creates an object file for the single architecture that GCC was built to target. Apple's GCC on Darwin does create "fat" files if multiple -arch options are used; it does so by running the compiler or linker multiple times and joining the results together with lipo. The subtype of the file created (like ppc7400 or ppc970 or i686) is determined by the flags that specify the ISA that GCC is targeting, like -mcpu or -march. The -force_cpusubtype_ALL option can be used to override this. The Darwin tools vary in their behavior when presented with an ISA mismatch. The assembler, as, only permits instructions to be used that are valid for the subtype of the file it is generating, so you cannot put 64-bit instructions in a ppc750 object file. The linker for shared libraries, /usr/bin/libtool, fails and prints an error if asked to create a shared library with a less restrictive subtype than its input files (for instance, trying to put a ppc970 object file in a ppc7400 library). The linker for executables, ld, quietly gives the executable the most restrictive subtype of any of its input files. -Fdir Add the framework directory dir to the head of the list of directories to be searched for header files. These directories are interleaved with those specified by -I options and are scanned in a left-to-right order. A framework directory is a directory with frameworks in it. A framework is a directory whose name ends in .framework and that contains a Headers and/or PrivateHeaders directory directly within it. The name of a framework is the name of this directory excluding the .framework. Headers associated with the framework are found in one of those two directories, with Headers being searched first. A subframework is a framework directory that is in a framework's Frameworks directory. Includes of subframework headers can only appear in a header of a framework that contains the subframework, or in a sibling subframework header. Two subframeworks are siblings if they occur in the same framework. A subframework should not have the same name as a framework; a warning is issued if this is violated. Currently a subframework cannot have subframeworks; in the future, the mechanism may be extended to support this. The standard frameworks can be found in /System/Library/Frameworks and /Library/Frameworks. An example include looks like "#include <Framework/header.h>", where Framework denotes the name of the framework and header.h is found in the PrivateHeaders or Headers directory. -iframeworkdir Like -F except the directory is treated as a system directory.
The main difference between this -iframework and -F is that with -iframework the compiler does not warn about constructs contained within header files found via dir. This option is valid only for the C family of languages. -gused Emit debugging information for symbols that are used. For stabs debugging format, this enables -feliminate-unused-debug-symbols. This is by default ON. -gfull Emit debugging information for all symbols and types. -mmacosx-version-min=version The earliest version of MacOS X that this executable will run on is version. Typical values of version include 10.1, 10.2, and 10.3.9. If the compiler was built to use the system's headers by default, then the default for this option is the system version on which the compiler is running, otherwise the default is to make choices that are compatible with as many systems and code bases as possible. -mkernel Enable kernel development mode. The -mkernel option sets -static, -fno-common, -fno-use-cxa-atexit, -fno-exceptions, -fno-non-call-exceptions, -fapple-kext, -fno-weak and -fno-rtti where applicable. This mode also sets -mno-altivec, -msoft-float, -fno-builtin and -mlong-branch for PowerPC targets. -mone-byte-bool Override the defaults for "bool" so that "sizeof(bool)==1". By default "sizeof(bool)" is 4 when compiling for Darwin/PowerPC and 1 when compiling for Darwin/x86, so this option has no effect on x86. Warning: The -mone-byte-bool switch causes GCC to generate code that is not binary compatible with code generated without that switch. Using this switch may require recompiling all other modules in a program, including system libraries. Use this switch to conform to a non-default data model. -mfix-and-continue -ffix-and-continue -findirect-data Generate code suitable for fast turnaround development, such as to allow GDB to dynamically load .o files into already- running programs. -findirect-data and -ffix-and-continue are provided for backwards compatibility. -all_load Loads all members of static archive libraries. See man ld(1) for more information. -arch_errors_fatal Cause the errors having to do with files that have the wrong architecture to be fatal. -bind_at_load Causes the output file to be marked such that the dynamic linker will bind all undefined references when the file is loaded or launched. -bundle Produce a Mach-o bundle format file. See man ld(1) for more information. -bundle_loader executable This option specifies the executable that will load the build output file being linked. See man ld(1) for more information. -dynamiclib When passed this option, GCC produces a dynamic library instead of an executable when linking, using the Darwin libtool command. -force_cpusubtype_ALL This causes GCC's output file to have the ALL subtype, instead of one controlled by the -mcpu or -march option. 
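As an illustration of the binary-compatibility warning under -mone-byte-bool above (a sketch, not part of the option description), a module can assert at compile time the "bool" size it expects, so that a mismatched translation unit fails to build instead of misbehaving at run time:

        /* C11: expects every module of the program to be built with
           -mone-byte-bool; adjust the expected size otherwise. */
        #include <assert.h>
        #include <stdbool.h>

        static_assert(sizeof(bool) == 1,
                      "compile this module with -mone-byte-bool");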
-allowable_client client_name -client_name -compatibility_version -current_version -dead_strip -dependency-file -dylib_file -dylinker_install_name -dynamic -exported_symbols_list -filelist -flat_namespace -force_flat_namespace -headerpad_max_install_names -image_base -init -install_name -keep_private_externs -multi_module -multiply_defined -multiply_defined_unused -noall_load -no_dead_strip_inits_and_terms -nofixprebinding -nomultidefs -noprebind -noseglinkedit -pagezero_size -prebind -prebind_all_twolevel_modules -private_bundle -read_only_relocs -sectalign -sectobjectsymbols -whyload -seg1addr -sectcreate -sectobjectsymbols -sectorder -segaddr -segs_read_only_addr -segs_read_write_addr -seg_addr_table -seg_addr_table_filename -seglinkedit -segprot -segs_read_only_addr -segs_read_write_addr -single_module -static -sub_library -sub_umbrella -twolevel_namespace -umbrella -undefined -unexported_symbols_list -weak_reference_mismatches -whatsloaded These options are passed to the Darwin linker. The Darwin linker man page describes them in detail. DEC Alpha Options These -m options are defined for the DEC Alpha implementations: -mno-soft-float -msoft-float Use (do not use) the hardware floating-point instructions for floating-point operations. When -msoft-float is specified, functions in libgcc.a are used to perform floating-point operations. Unless they are replaced by routines that emulate the floating-point operations, or compiled in such a way as to call such emulations routines, these routines issue floating-point operations. If you are compiling for an Alpha without floating-point operations, you must ensure that the library is built so as not to call them. Note that Alpha implementations without floating-point operations are required to have floating-point registers. -mfp-reg -mno-fp-regs Generate code that uses (does not use) the floating-point register set. -mno-fp-regs implies -msoft-float. If the floating-point register set is not used, floating-point operands are passed in integer registers as if they were integers and floating-point results are passed in $0 instead of $f0. This is a non-standard calling sequence, so any function with a floating-point argument or return value called by code compiled with -mno-fp-regs must also be compiled with that option. A typical use of this option is building a kernel that does not use, and hence need not save and restore, any floating- point registers. -mieee The Alpha architecture implements floating-point hardware optimized for maximum performance. It is mostly compliant with the IEEE floating-point standard. However, for full compliance, software assistance is required. This option generates code fully IEEE-compliant code except that the inexact-flag is not maintained (see below). If this option is turned on, the preprocessor macro "_IEEE_FP" is defined during compilation. The resulting code is less efficient but is able to correctly support denormalized numbers and exceptional IEEE values such as not-a-number and plus/minus infinity. Other Alpha compilers call this option -ieee_with_no_inexact. -mieee-with-inexact This is like -mieee except the generated code also maintains the IEEE inexact-flag. Turning on this option causes the generated code to implement fully-compliant IEEE math. In addition to "_IEEE_FP", "_IEEE_FP_EXACT" is defined as a preprocessor macro. On some Alpha implementations the resulting code may execute significantly slower than the code generated by default. 
Since there is very little code that depends on the inexact-flag, you should normally not specify this option. Other Alpha compilers call this option -ieee_with_inexact. -mfp-trap-mode=trap-mode This option controls what floating-point related traps are enabled. Other Alpha compilers call this option -fptm trap- mode. The trap mode can be set to one of four values: n This is the default (normal) setting. The only traps that are enabled are the ones that cannot be disabled in software (e.g., division by zero trap). u In addition to the traps enabled by n, underflow traps are enabled as well. su Like u, but the instructions are marked to be safe for software completion (see Alpha architecture manual for details). sui Like su, but inexact traps are enabled as well. -mfp-rounding-mode=rounding-mode Selects the IEEE rounding mode. Other Alpha compilers call this option -fprm rounding-mode. The rounding-mode can be one of: n Normal IEEE rounding mode. Floating-point numbers are rounded towards the nearest machine number or towards the even machine number in case of a tie. m Round towards minus infinity. c Chopped rounding mode. Floating-point numbers are rounded towards zero. d Dynamic rounding mode. A field in the floating-point control register (fpcr, see Alpha architecture reference manual) controls the rounding mode in effect. The C library initializes this register for rounding towards plus infinity. Thus, unless your program modifies the fpcr, d corresponds to round towards plus infinity. -mtrap-precision=trap-precision In the Alpha architecture, floating-point traps are imprecise. This means without software assistance it is impossible to recover from a floating trap and program execution normally needs to be terminated. GCC can generate code that can assist operating system trap handlers in determining the exact location that caused a floating-point trap. Depending on the requirements of an application, different levels of precisions can be selected: p Program precision. This option is the default and means a trap handler can only identify which program caused a floating-point exception. f Function precision. The trap handler can determine the function that caused a floating-point exception. i Instruction precision. The trap handler can determine the exact instruction that caused a floating-point exception. Other Alpha compilers provide the equivalent options called -scope_safe and -resumption_safe. -mieee-conformant This option marks the generated code as IEEE conformant. You must not use this option unless you also specify -mtrap-precision=i and either -mfp-trap-mode=su or -mfp-trap-mode=sui. Its only effect is to emit the line .eflag 48 in the function prologue of the generated assembly file. -mbuild-constants Normally GCC examines a 32- or 64-bit integer constant to see if it can construct it from smaller constants in two or three instructions. If it cannot, it outputs the constant as a literal and generates code to load it from the data segment at run time. Use this option to require GCC to construct all integer constants using code, even if it takes more instructions (the maximum is six). You typically use this option to build a shared library dynamic loader. Itself a shared library, it must relocate itself in memory before it can find the variables and constants in its own data segment. -mbwx -mno-bwx -mcix -mno-cix -mfix -mno-fix -mmax -mno-max Indicate whether GCC should generate code to use the optional BWX, CIX, FIX and MAX instruction sets. 
The default is to use the instruction sets supported by the CPU type specified via -mcpu= option or that of the CPU on which GCC was built if none is specified. -mfloat-vax -mfloat-ieee Generate code that uses (does not use) VAX F and G floating- point arithmetic instead of IEEE single and double precision. -mexplicit-relocs -mno-explicit-relocs Older Alpha assemblers provided no way to generate symbol relocations except via assembler macros. Use of these macros does not allow optimal instruction scheduling. GNU binutils as of version 2.12 supports a new syntax that allows the compiler to explicitly mark which relocations should apply to which instructions. This option is mostly useful for debugging, as GCC detects the capabilities of the assembler when it is built and sets the default accordingly. -msmall-data -mlarge-data When -mexplicit-relocs is in effect, static data is accessed via gp-relative relocations. When -msmall-data is used, objects 8 bytes long or smaller are placed in a small data area (the ".sdata" and ".sbss" sections) and are accessed via 16-bit relocations off of the $gp register. This limits the size of the small data area to 64KB, but allows the variables to be directly accessed via a single instruction. The default is -mlarge-data. With this option the data area is limited to just below 2GB. Programs that require more than 2GB of data must use "malloc" or "mmap" to allocate the data in the heap instead of in the program's data segment. When generating code for shared libraries, -fpic implies -msmall-data and -fPIC implies -mlarge-data. -msmall-text -mlarge-text When -msmall-text is used, the compiler assumes that the code of the entire program (or shared library) fits in 4MB, and is thus reachable with a branch instruction. When -msmall-data is used, the compiler can assume that all local symbols share the same $gp value, and thus reduce the number of instructions required for a function call from 4 to 1. The default is -mlarge-text. -mcpu=cpu_type Set the instruction set and instruction scheduling parameters for machine type cpu_type. You can specify either the EV style name or the corresponding chip number. GCC supports scheduling parameters for the EV4, EV5 and EV6 family of processors and chooses the default values for the instruction set from the processor you specify. If you do not specify a processor type, GCC defaults to the processor on which the compiler was built. Supported values for cpu_type are ev4 ev45 21064 Schedules as an EV4 and has no instruction set extensions. ev5 21164 Schedules as an EV5 and has no instruction set extensions. ev56 21164a Schedules as an EV5 and supports the BWX extension. pca56 21164pc 21164PC Schedules as an EV5 and supports the BWX and MAX extensions. ev6 21264 Schedules as an EV6 and supports the BWX, FIX, and MAX extensions. ev67 21264a Schedules as an EV6 and supports the BWX, CIX, FIX, and MAX extensions. Native toolchains also support the value native, which selects the best architecture option for the host processor. -mcpu=native has no effect if GCC does not recognize the processor. -mtune=cpu_type Set only the instruction scheduling parameters for machine type cpu_type. The instruction set is not changed. Native toolchains also support the value native, which selects the best architecture option for the host processor. -mtune=native has no effect if GCC does not recognize the processor. -mmemory-latency=time Sets the latency the scheduler should assume for typical memory references as seen by the application. 
This number is highly dependent on the memory access patterns used by the application and the size of the external cache on the machine. Valid options for time are number A decimal number representing clock cycles. L1 L2 L3 main The compiler contains estimates of the number of clock cycles for "typical" EV4 & EV5 hardware for the Level 1, 2 & 3 caches (also called Dcache, Scache, and Bcache), as well as to main memory. Note that L3 is only valid for EV5. FR30 Options These options are defined specifically for the FR30 port. -msmall-model Use the small address space model. This can produce smaller code, but it does assume that all symbolic values and addresses fit into a 20-bit range. -mno-lsim Assume that runtime support has been provided and so there is no need to include the simulator library (libsim.a) on the linker command line. FT32 Options These options are defined specifically for the FT32 port. -msim Specifies that the program will be run on the simulator. This causes an alternate runtime startup and library to be linked. You must not use this option when generating programs that will run on real hardware; you must provide your own runtime library for whatever I/O functions are needed. -mlra Enable Local Register Allocation. This is still experimental for FT32, so by default the compiler uses standard reload. -mnodiv Do not use div and mod instructions. -mft32b Enable use of the extended instructions of the FT32B processor. -mcompress Compress all code using the Ft32B code compression scheme. -mnopm Do not generate code that reads program memory. FRV Options -mgpr-32 Only use the first 32 general-purpose registers. -mgpr-64 Use all 64 general-purpose registers. -mfpr-32 Use only the first 32 floating-point registers. -mfpr-64 Use all 64 floating-point registers. -mhard-float Use hardware instructions for floating-point operations. -msoft-float Use library routines for floating-point operations. -malloc-cc Dynamically allocate condition code registers. -mfixed-cc Do not try to dynamically allocate condition code registers, only use "icc0" and "fcc0". -mdword Change ABI to use double word insns. -mno-dword Do not use double word instructions. -mdouble Use floating-point double instructions. -mno-double Do not use floating-point double instructions. -mmedia Use media instructions. -mno-media Do not use media instructions. -mmuladd Use multiply and add/subtract instructions. -mno-muladd Do not use multiply and add/subtract instructions. -mfdpic Select the FDPIC ABI, which uses function descriptors to represent pointers to functions. Without any PIC/PIE-related options, it implies -fPIE. With -fpic or -fpie, it assumes GOT entries and small data are within a 12-bit range from the GOT base address; with -fPIC or -fPIE, GOT offsets are computed with 32 bits. With a bfin-elf target, this option implies -msim. -minline-plt Enable inlining of PLT entries in function calls to functions that are not known to bind locally. It has no effect without -mfdpic. It's enabled by default if optimizing for speed and compiling for shared libraries (i.e., -fPIC or -fpic), or when an optimization option such as -O3 or above is present in the command line. -mTLS Assume a large TLS segment when generating thread-local code. -mtls Do not assume a large TLS segment when generating thread- local code. -mgprel-ro Enable the use of "GPREL" relocations in the FDPIC ABI for data that is known to be in read-only sections. 
It's enabled by default, except for -fpic or -fpie: even though it may help make the global offset table smaller, it trades 1 instruction for 4. With -fPIC or -fPIE, it trades 3 instructions for 4, one of which may be shared by multiple symbols, and it avoids the need for a GOT entry for the referenced symbol, so it's more likely to be a win. If it is not, -mno-gprel-ro can be used to disable it. -multilib-library-pic Link with the (library, not FD) pic libraries. It's implied by -mlibrary-pic, as well as by -fPIC and -fpic without -mfdpic. You should never have to use it explicitly. -mlinked-fp Follow the EABI requirement of always creating a frame pointer whenever a stack frame is allocated. This option is enabled by default and can be disabled with -mno-linked-fp. -mlong-calls Use indirect addressing to call functions outside the current compilation unit. This allows the functions to be placed anywhere within the 32-bit address space. -malign-labels Try to align labels to an 8-byte boundary by inserting NOPs into the previous packet. This option only has an effect when VLIW packing is enabled. It doesn't create new packets; it merely adds NOPs to existing ones. -mlibrary-pic Generate position-independent EABI code. -macc-4 Use only the first four media accumulator registers. -macc-8 Use all eight media accumulator registers. -mpack Pack VLIW instructions. -mno-pack Do not pack VLIW instructions. -mno-eflags Do not mark ABI switches in e_flags. -mcond-move Enable the use of conditional-move instructions (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-cond-move Disable the use of conditional-move instructions. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mscc Enable the use of conditional set instructions (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-scc Disable the use of conditional set instructions. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mcond-exec Enable the use of conditional execution (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-cond-exec Disable the use of conditional execution. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mvliw-branch Run a pass to pack branches into VLIW instructions (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-vliw-branch Do not run a pass to pack branches into VLIW instructions. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mmulti-cond-exec Enable optimization of "&&" and "||" in conditional execution (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-multi-cond-exec Disable optimization of "&&" and "||" in conditional execution. This switch is mainly for debugging the compiler and will likely be removed in a future version. -mnested-cond-exec Enable nested conditional execution optimizations (default). This switch is mainly for debugging the compiler and will likely be removed in a future version. -mno-nested-cond-exec Disable nested conditional execution optimizations. This switch is mainly for debugging the compiler and will likely be removed in a future version. 
-moptimize-membar This switch removes redundant "membar" instructions from the compiler-generated code. It is enabled by default. -mno-optimize-membar This switch disables the automatic removal of redundant "membar" instructions from the generated code. -mtomcat-stats Cause gas to print out tomcat statistics. -mcpu=cpu Select the processor type for which to generate code. Possible values are frv, fr550, tomcat, fr500, fr450, fr405, fr400, fr300 and simple. GNU/Linux Options These -m options are defined for GNU/Linux targets: -mglibc Use the GNU C library. This is the default except on *-*-linux-*uclibc*, *-*-linux-*musl* and *-*-linux-*android* targets. -muclibc Use uClibc C library. This is the default on *-*-linux-*uclibc* targets. -mmusl Use the musl C library. This is the default on *-*-linux-*musl* targets. -mbionic Use Bionic C library. This is the default on *-*-linux-*android* targets. -mandroid Compile code compatible with Android platform. This is the default on *-*-linux-*android* targets. When compiling, this option enables -mbionic, -fPIC, -fno-exceptions and -fno-rtti by default. When linking, this option makes the GCC driver pass Android-specific options to the linker. Finally, this option causes the preprocessor macro "__ANDROID__" to be defined. -tno-android-cc Disable compilation effects of -mandroid, i.e., do not enable -mbionic, -fPIC, -fno-exceptions and -fno-rtti by default. -tno-android-ld Disable linking effects of -mandroid, i.e., pass standard Linux linking options to the linker. H8/300 Options These -m options are defined for the H8/300 implementations: -mrelax Shorten some address references at link time, when possible; uses the linker option -relax. -mh Generate code for the H8/300H. -ms Generate code for the H8S. -mn Generate code for the H8S and H8/300H in the normal mode. This switch must be used either with -mh or -ms. -ms2600 Generate code for the H8S/2600. This switch must be used with -ms. -mexr Extended registers are stored on stack before execution of function with monitor attribute. Default option is -mexr. This option is valid only for H8S targets. -mno-exr Extended registers are not stored on stack before execution of function with monitor attribute. Default option is -mno-exr. This option is valid only for H8S targets. -mint32 Make "int" data 32 bits by default. -malign-300 On the H8/300H and H8S, use the same alignment rules as for the H8/300. The default for the H8/300H and H8S is to align longs and floats on 4-byte boundaries. -malign-300 causes them to be aligned on 2-byte boundaries. This option has no effect on the H8/300. HPPA Options These -m options are defined for the HPPA family of computers: -march=architecture-type Generate code for the specified architecture. The choices for architecture-type are 1.0 for PA 1.0, 1.1 for PA 1.1, and 2.0 for PA 2.0 processors. Refer to /usr/lib/sched.models on an HP-UX system to determine the proper architecture option for your machine. Code compiled for lower numbered architectures runs on higher numbered architectures, but not the other way around. -mpa-risc-1-0 -mpa-risc-1-1 -mpa-risc-2-0 Synonyms for -march=1.0, -march=1.1, and -march=2.0 respectively. -mcaller-copies The caller copies function arguments passed by hidden reference. This option should be used with care as it is not compatible with the default 32-bit runtime. However, only aggregates larger than eight bytes are passed by hidden reference and the option provides better compatibility with OpenMP. 
-mjump-in-delay This option is ignored and provided for compatibility purposes only. -mdisable-fpregs Prevent floating-point registers from being used in any manner. This is necessary for compiling kernels that perform lazy context switching of floating-point registers. If you use this option and attempt to perform floating-point operations, the compiler aborts. -mdisable-indexing Prevent the compiler from using indexing address modes. This avoids some rather obscure problems when compiling MIG generated code under MACH. -mno-space-regs Generate code that assumes the target has no space registers. This allows GCC to generate faster indirect calls and use unscaled index address modes. Such code is suitable for level 0 PA systems and kernels. -mfast-indirect-calls Generate code that assumes calls never cross space boundaries. This allows GCC to emit code that performs faster indirect calls. This option does not work in the presence of shared libraries or nested functions. -mfixed-range=register-range Generate code treating the given register range as fixed registers. A fixed register is one that the register allocator cannot use. This is useful when compiling kernel code. A register range is specified as two registers separated by a dash. Multiple register ranges can be specified separated by a comma. -mlong-load-store Generate 3-instruction load and store sequences as sometimes required by the HP-UX 10 linker. This is equivalent to the +k option to the HP compilers. -mportable-runtime Use the portable calling conventions proposed by HP for ELF systems. -mgas Enable the use of assembler directives only GAS understands. -mschedule=cpu-type Schedule code according to the constraints for the machine type cpu-type. The choices for cpu-type are 700 7100, 7100LC, 7200, 7300 and 8000. Refer to /usr/lib/sched.models on an HP-UX system to determine the proper scheduling option for your machine. The default scheduling is 8000. -mlinker-opt Enable the optimization pass in the HP-UX linker. Note this makes symbolic debugging impossible. It also triggers a bug in the HP-UX 8 and HP-UX 9 linkers in which they give bogus error messages when linking some programs. -msoft-float Generate output containing library calls for floating point. Warning: the requisite libraries are not available for all HPPA targets. Normally the facilities of the machine's usual C compiler are used, but this cannot be done directly in cross-compilation. You must make your own arrangements to provide suitable library functions for cross-compilation. -msoft-float changes the calling convention in the output file; therefore, it is only useful if you compile all of a program with this option. In particular, you need to compile libgcc.a, the library that comes with GCC, with -msoft-float in order for this to work. -msio Generate the predefine, "_SIO", for server IO. The default is -mwsio. This generates the predefines, "__hp9000s700", "__hp9000s700__" and "_WSIO", for workstation IO. These options are available under HP-UX and HI-UX. -mgnu-ld Use options specific to GNU ld. This passes -shared to ld when building a shared library. It is the default when GCC is configured, explicitly or implicitly, with the GNU linker. This option does not affect which ld is called; it only changes what parameters are passed to that ld. The ld that is called is determined by the --with-ld configure option, GCC's program search path, and finally by the user's PATH. The linker used by GCC can be printed using which `gcc -print-prog-name=ld`. 
This option is only available on the 64-bit HP-UX GCC, i.e. configured with hppa*64*-*-hpux*. -mhp-ld Use options specific to HP ld. This passes -b to ld when building a shared library and passes +Accept TypeMismatch to ld on all links. It is the default when GCC is configured, explicitly or implicitly, with the HP linker. This option does not affect which ld is called; it only changes what parameters are passed to that ld. The ld that is called is determined by the --with-ld configure option, GCC's program search path, and finally by the user's PATH. The linker used by GCC can be printed using which `gcc -print-prog-name=ld`. This option is only available on the 64-bit HP-UX GCC, i.e. configured with hppa*64*-*-hpux*. -mlong-calls Generate code that uses long call sequences. This ensures that a call is always able to reach linker generated stubs. The default is to generate long calls only when the distance from the call site to the beginning of the function or translation unit, as the case may be, exceeds a predefined limit set by the branch type being used. The limits for normal calls are 7,600,000 and 240,000 bytes, respectively for the PA 2.0 and PA 1.X architectures. Sibcalls are always limited at 240,000 bytes. Distances are measured from the beginning of functions when using the -ffunction-sections option, or when using the -mgas and -mno-portable-runtime options together under HP-UX with the SOM linker. It is normally not desirable to use this option as it degrades performance. However, it may be useful in large applications, particularly when partial linking is used to build the application. The types of long calls used depends on the capabilities of the assembler and linker, and the type of code being generated. The impact on systems that support long absolute calls, and long pic symbol-difference or pc-relative calls should be relatively small. However, an indirect call is used on 32-bit ELF systems in pic code and it is quite long. -munix=unix-std Generate compiler predefines and select a startfile for the specified UNIX standard. The choices for unix-std are 93, 95 and 98. 93 is supported on all HP-UX versions. 95 is available on HP-UX 10.10 and later. 98 is available on HP-UX 11.11 and later. The default values are 93 for HP-UX 10.00, 95 for HP-UX 10.10 though to 11.00, and 98 for HP-UX 11.11 and later. -munix=93 provides the same predefines as GCC 3.3 and 3.4. -munix=95 provides additional predefines for "XOPEN_UNIX" and "_XOPEN_SOURCE_EXTENDED", and the startfile unix95.o. -munix=98 provides additional predefines for "_XOPEN_UNIX", "_XOPEN_SOURCE_EXTENDED", "_INCLUDE__STDC_A1_SOURCE" and "_INCLUDE_XOPEN_SOURCE_500", and the startfile unix98.o. It is important to note that this option changes the interfaces for various library routines. It also affects the operational behavior of the C library. Thus, extreme care is needed in using this option. Library code that is intended to operate with more than one UNIX standard must test, set and restore the variable "__xpg4_extended_mask" as appropriate. Most GNU software doesn't provide this capability. -nolibdld Suppress the generation of link options to search libdld.sl when the -static option is specified on HP-UX 10 and later. -static The HP-UX implementation of setlocale in libc has a dependency on libdld.sl. There isn't an archive version of libdld.sl. Thus, when the -static option is specified, special link options are needed to resolve this dependency. 
On HP-UX 10 and later, the GCC driver adds the necessary options to link with libdld.sl when the -static option is specified. This causes the resulting binary to be dynamic. On the 64-bit port, the linkers generate dynamic binaries by default in any case. The -nolibdld option can be used to prevent the GCC driver from adding these link options. -threads Add support for multithreading with the dce thread library under HP-UX. This option sets flags for both the preprocessor and linker. IA-64 Options These are the -m options defined for the Intel IA-64 architecture. -mbig-endian Generate code for a big-endian target. This is the default for HP-UX. -mlittle-endian Generate code for a little-endian target. This is the default for AIX5 and GNU/Linux. -mgnu-as -mno-gnu-as Generate (or don't) code for the GNU assembler. This is the default. -mgnu-ld -mno-gnu-ld Generate (or don't) code for the GNU linker. This is the default. -mno-pic Generate code that does not use a global pointer register. The result is not position independent code, and violates the IA-64 ABI. -mvolatile-asm-stop -mno-volatile-asm-stop Generate (or don't) a stop bit immediately before and after volatile asm statements. -mregister-names -mno-register-names Generate (or don't) in, loc, and out register names for the stacked registers. This may make assembler output more readable. -mno-sdata -msdata Disable (or enable) optimizations that use the small data section. This may be useful for working around optimizer bugs. -mconstant-gp Generate code that uses a single constant global pointer value. This is useful when compiling kernel code. -mauto-pic Generate code that is self-relocatable. This implies -mconstant-gp. This is useful when compiling firmware code. -minline-float-divide-min-latency Generate code for inline divides of floating-point values using the minimum latency algorithm. -minline-float-divide-max-throughput Generate code for inline divides of floating-point values using the maximum throughput algorithm. -mno-inline-float-divide Do not generate inline code for divides of floating-point values. -minline-int-divide-min-latency Generate code for inline divides of integer values using the minimum latency algorithm. -minline-int-divide-max-throughput Generate code for inline divides of integer values using the maximum throughput algorithm. -mno-inline-int-divide Do not generate inline code for divides of integer values. -minline-sqrt-min-latency Generate code for inline square roots using the minimum latency algorithm. -minline-sqrt-max-throughput Generate code for inline square roots using the maximum throughput algorithm. -mno-inline-sqrt Do not generate inline code for "sqrt". -mfused-madd -mno-fused-madd Do (don't) generate code that uses the fused multiply/add or multiply/subtract instructions. The default is to use these instructions. -mno-dwarf2-asm -mdwarf2-asm Don't (or do) generate assembler code for the DWARF line number debugging info. This may be useful when not using the GNU assembler. -mearly-stop-bits -mno-early-stop-bits Allow stop bits to be placed earlier than immediately preceding the instruction that triggered the stop bit. This can improve instruction scheduling, but does not always do so. -mfixed-range=register-range Generate code treating the given register range as fixed registers. A fixed register is one that the register allocator cannot use. This is useful when compiling kernel code. A register range is specified as two registers separated by a dash. 
Multiple register ranges can be specified separated by a comma. -mtls-size=tls-size Specify bit size of immediate TLS offsets. Valid values are 14, 22, and 64. -mtune=cpu-type Tune the instruction scheduling for a particular CPU, Valid values are itanium, itanium1, merced, itanium2, and mckinley. -milp32 -mlp64 Generate code for a 32-bit or 64-bit environment. The 32-bit environment sets int, long and pointer to 32 bits. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits. These are HP-UX specific flags. -mno-sched-br-data-spec -msched-br-data-spec (Dis/En)able data speculative scheduling before reload. This results in generation of "ld.a" instructions and the corresponding check instructions ("ld.c" / "chk.a"). The default setting is disabled. -msched-ar-data-spec -mno-sched-ar-data-spec (En/Dis)able data speculative scheduling after reload. This results in generation of "ld.a" instructions and the corresponding check instructions ("ld.c" / "chk.a"). The default setting is enabled. -mno-sched-control-spec -msched-control-spec (Dis/En)able control speculative scheduling. This feature is available only during region scheduling (i.e. before reload). This results in generation of the "ld.s" instructions and the corresponding check instructions "chk.s". The default setting is disabled. -msched-br-in-data-spec -mno-sched-br-in-data-spec (En/Dis)able speculative scheduling of the instructions that are dependent on the data speculative loads before reload. This is effective only with -msched-br-data-spec enabled. The default setting is enabled. -msched-ar-in-data-spec -mno-sched-ar-in-data-spec (En/Dis)able speculative scheduling of the instructions that are dependent on the data speculative loads after reload. This is effective only with -msched-ar-data-spec enabled. The default setting is enabled. -msched-in-control-spec -mno-sched-in-control-spec (En/Dis)able speculative scheduling of the instructions that are dependent on the control speculative loads. This is effective only with -msched-control-spec enabled. The default setting is enabled. -mno-sched-prefer-non-data-spec-insns -msched-prefer-non-data-spec-insns If enabled, data-speculative instructions are chosen for schedule only if there are no other choices at the moment. This makes the use of the data speculation much more conservative. The default setting is disabled. -mno-sched-prefer-non-control-spec-insns -msched-prefer-non-control-spec-insns If enabled, control-speculative instructions are chosen for schedule only if there are no other choices at the moment. This makes the use of the control speculation much more conservative. The default setting is disabled. -mno-sched-count-spec-in-critical-path -msched-count-spec-in-critical-path If enabled, speculative dependencies are considered during computation of the instructions priorities. This makes the use of the speculation a bit more conservative. The default setting is disabled. -msched-spec-ldc Use a simple data speculation check. This option is on by default. -msched-control-spec-ldc Use a simple check for control speculation. This option is on by default. -msched-stop-bits-after-every-cycle Place a stop bit after every cycle when scheduling. This option is on by default. -msched-fp-mem-deps-zero-cost Assume that floating-point stores and loads are not likely to cause a conflict when placed into the same instruction group. This option is disabled by default. -msel-sched-dont-check-control-spec Generate checks for control speculation in selective scheduling. 
This flag is disabled by default. -msched-max-memory-insns=max-insns Limit on the number of memory insns per instruction group, giving lower priority to subsequent memory insns attempting to schedule in the same instruction group. Frequently useful to prevent cache bank conflicts. The default value is 1. -msched-max-memory-insns-hard-limit Makes the limit specified by msched-max-memory-insns a hard limit, disallowing more than that number in an instruction group. Otherwise, the limit is "soft", meaning that non- memory operations are preferred when the limit is reached, but memory operations may still be scheduled. LM32 Options These -m options are defined for the LatticeMico32 architecture: -mbarrel-shift-enabled Enable barrel-shift instructions. -mdivide-enabled Enable divide and modulus instructions. -mmultiply-enabled Enable multiply instructions. -msign-extend-enabled Enable sign extend instructions. -muser-enabled Enable user-defined instructions. M32C Options -mcpu=name Select the CPU for which code is generated. name may be one of r8c for the R8C/Tiny series, m16c for the M16C (up to /60) series, m32cm for the M16C/80 series, or m32c for the M32C/80 series. -msim Specifies that the program will be run on the simulator. This causes an alternate runtime library to be linked in which supports, for example, file I/O. You must not use this option when generating programs that will run on real hardware; you must provide your own runtime library for whatever I/O functions are needed. -memregs=number Specifies the number of memory-based pseudo-registers GCC uses during code generation. These pseudo-registers are used like real registers, so there is a tradeoff between GCC's ability to fit the code into available registers, and the performance penalty of using memory instead of registers. Note that all modules in a program must be compiled with the same value for this option. Because of that, you must not use this option with GCC's default runtime libraries. M32R/D Options These -m options are defined for Renesas M32R/D architectures: -m32r2 Generate code for the M32R/2. -m32rx Generate code for the M32R/X. -m32r Generate code for the M32R. This is the default. -mmodel=small Assume all objects live in the lower 16MB of memory (so that their addresses can be loaded with the "ld24" instruction), and assume all subroutines are reachable with the "bl" instruction. This is the default. The addressability of a particular object can be set with the "model" attribute. -mmodel=medium Assume objects may be anywhere in the 32-bit address space (the compiler generates "seth/add3" instructions to load their addresses), and assume all subroutines are reachable with the "bl" instruction. -mmodel=large Assume objects may be anywhere in the 32-bit address space (the compiler generates "seth/add3" instructions to load their addresses), and assume subroutines may not be reachable with the "bl" instruction (the compiler generates the much slower "seth/add3/jl" instruction sequence). -msdata=none Disable use of the small data area. Variables are put into one of ".data", ".bss", or ".rodata" (unless the "section" attribute has been specified). This is the default. The small data area consists of sections ".sdata" and ".sbss". Objects may be explicitly put in the small data area with the "section" attribute using one of these sections. -msdata=sdata Put small global and static data in the small data area, but do not generate special code to reference them. 
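As a short illustration of the "section" attribute mentioned under -msdata=none above (a sketch; the variable name is illustrative), an object can be placed in the small data area explicitly, regardless of its size:

        /* Forces the object into .sdata so that small-data addressing
           can be used for it. */
        int hot_counter __attribute__((section(".sdata"))) = 0;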
-msdata=use Put small global and static data in the small data area, and generate special instructions to reference them. -G num Put global and static objects less than or equal to num bytes into the small data or BSS sections instead of the normal data or BSS sections. The default value of num is 8. The -msdata option must be set to one of sdata or use for this option to have any effect. All modules should be compiled with the same -G num value. Compiling with different values of num may or may not work; if it doesn't the linker gives an error message---incorrect code is not generated. -mdebug Makes the M32R-specific code in the compiler display some statistics that might help in debugging programs. -malign-loops Align all loops to a 32-byte boundary. -mno-align-loops Do not enforce a 32-byte alignment for loops. This is the default. -missue-rate=number Issue number instructions per cycle. number can only be 1 or 2. -mbranch-cost=number number can only be 1 or 2. If it is 1 then branches are preferred over conditional code, if it is 2, then the opposite applies. -mflush-trap=number Specifies the trap number to use to flush the cache. The default is 12. Valid numbers are between 0 and 15 inclusive. -mno-flush-trap Specifies that the cache cannot be flushed by using a trap. -mflush-func=name Specifies the name of the operating system function to call to flush the cache. The default is _flush_cache, but a function call is only used if a trap is not available. -mno-flush-func Indicates that there is no OS function for flushing the cache. M680x0 Options These are the -m options defined for M680x0 and ColdFire processors. The default settings depend on which architecture was selected when the compiler was configured; the defaults for the most common choices are given below. -march=arch Generate code for a specific M680x0 or ColdFire instruction set architecture. Permissible values of arch for M680x0 architectures are: 68000, 68010, 68020, 68030, 68040, 68060 and cpu32. ColdFire architectures are selected according to Freescale's ISA classification and the permissible values are: isaa, isaaplus, isab and isac. GCC defines a macro "__mcfarch__" whenever it is generating code for a ColdFire target. The arch in this macro is one of the -march arguments given above. When used together, -march and -mtune select code that runs on a family of similar processors but that is optimized for a particular microarchitecture. -mcpu=cpu Generate code for a specific M680x0 or ColdFire processor. The M680x0 cpus are: 68000, 68010, 68020, 68030, 68040, 68060, 68302, 68332 and cpu32. The ColdFire cpus are given by the table below, which also classifies the CPUs into families: Family : -mcpu arguments 51 : 51 51ac 51ag 51cn 51em 51je 51jf 51jg 51jm 51mm 51qe 51qm 5206 : 5202 5204 5206 5206e : 5206e 5208 : 5207 5208 5211a : 5210a 5211a 5213 : 5211 5212 5213 5216 : 5214 5216 52235 : 52230 52231 52232 52233 52234 52235 5225 : 5224 5225 52259 : 52252 52254 52255 52256 52258 52259 5235 : 5232 5233 5234 5235 523x 5249 : 5249 5250 : 5250 5271 : 5270 5271 5272 : 5272 5275 : 5274 5275 5282 : 5280 5281 5282 528x 53017 : 53011 53012 53013 53014 53015 53016 53017 5307 : 5307 5329 : 5327 5328 5329 532x 5373 : 5372 5373 537x 5407 : 5407 5475 : 5470 5471 5472 5473 5474 5475 547x 5480 5481 5482 5483 5484 5485 -mcpu=cpu overrides -march=arch if arch is compatible with cpu. Other combinations of -mcpu and -march are rejected. GCC defines the macro "__mcf_cpu_cpu" when ColdFire target cpu is selected. 
It also defines "__mcf_family_family", where the value of family is given by the table above. -mtune=tune Tune the code for a particular microarchitecture within the constraints set by -march and -mcpu. The M680x0 microarchitectures are: 68000, 68010, 68020, 68030, 68040, 68060 and cpu32. The ColdFire microarchitectures are: cfv1, cfv2, cfv3, cfv4 and cfv4e. You can also use -mtune=68020-40 for code that needs to run relatively well on 68020, 68030 and 68040 targets. -mtune=68020-60 is similar but includes 68060 targets as well. These two options select the same tuning decisions as -m68020-40 and -m68020-60 respectively. GCC defines the macros "__mcarch" and "__mcarch__" when tuning for 680x0 architecture arch. It also defines "mcarch" unless either -ansi or a non-GNU -std option is used. If GCC is tuning for a range of architectures, as selected by -mtune=68020-40 or -mtune=68020-60, it defines the macros for every architecture in the range. GCC also defines the macro "__muarch__" when tuning for ColdFire microarchitecture uarch, where uarch is one of the arguments given above. -m68000 -mc68000 Generate output for a 68000. This is the default when the compiler is configured for 68000-based systems. It is equivalent to -march=68000. Use this option for microcontrollers with a 68000 or EC000 core, including the 68008, 68302, 68306, 68307, 68322, 68328 and 68356. -m68010 Generate output for a 68010. This is the default when the compiler is configured for 68010-based systems. It is equivalent to -march=68010. -m68020 -mc68020 Generate output for a 68020. This is the default when the compiler is configured for 68020-based systems. It is equivalent to -march=68020. -m68030 Generate output for a 68030. This is the default when the compiler is configured for 68030-based systems. It is equivalent to -march=68030. -m68040 Generate output for a 68040. This is the default when the compiler is configured for 68040-based systems. It is equivalent to -march=68040. This option inhibits the use of 68881/68882 instructions that have to be emulated by software on the 68040. Use this option if your 68040 does not have code to emulate those instructions. -m68060 Generate output for a 68060. This is the default when the compiler is configured for 68060-based systems. It is equivalent to -march=68060. This option inhibits the use of 68020 and 68881/68882 instructions that have to be emulated by software on the 68060. Use this option if your 68060 does not have code to emulate those instructions. -mcpu32 Generate output for a CPU32. This is the default when the compiler is configured for CPU32-based systems. It is equivalent to -march=cpu32. Use this option for microcontrollers with a CPU32 or CPU32+ core, including the 68330, 68331, 68332, 68333, 68334, 68336, 68340, 68341, 68349 and 68360. -m5200 Generate output for a 520X ColdFire CPU. This is the default when the compiler is configured for 520X-based systems. It is equivalent to -mcpu=5206, and is now deprecated in favor of that option. Use this option for microcontroller with a 5200 core, including the MCF5202, MCF5203, MCF5204 and MCF5206. -m5206e Generate output for a 5206e ColdFire CPU. The option is now deprecated in favor of the equivalent -mcpu=5206e. -m528x Generate output for a member of the ColdFire 528X family. The option is now deprecated in favor of the equivalent -mcpu=528x. -m5307 Generate output for a ColdFire 5307 CPU. The option is now deprecated in favor of the equivalent -mcpu=5307. -m5407 Generate output for a ColdFire 5407 CPU. 
The option is now deprecated in favor of the equivalent -mcpu=5407. -mcfv4e Generate output for a ColdFire V4e family CPU (e.g. 547x/548x). This includes use of hardware floating-point instructions. The option is equivalent to -mcpu=547x, and is now deprecated in favor of that option. -m68020-40 Generate output for a 68040, without using any of the new instructions. This results in code that can run relatively efficiently on either a 68020/68881 or a 68030 or a 68040. The generated code does use the 68881 instructions that are emulated on the 68040. The option is equivalent to -march=68020 -mtune=68020-40. -m68020-60 Generate output for a 68060, without using any of the new instructions. This results in code that can run relatively efficiently on either a 68020/68881 or a 68030 or a 68040. The generated code does use the 68881 instructions that are emulated on the 68060. The option is equivalent to -march=68020 -mtune=68020-60. -mhard-float -m68881 Generate floating-point instructions. This is the default for 68020 and above, and for ColdFire devices that have an FPU. It defines the macro "__HAVE_68881__" on M680x0 targets and "__mcffpu__" on ColdFire targets. -msoft-float Do not generate floating-point instructions; use library calls instead. This is the default for 68000, 68010, and 68832 targets. It is also the default for ColdFire devices that have no FPU. -mdiv -mno-div Generate (do not generate) ColdFire hardware divide and remainder instructions. If -march is used without -mcpu, the default is "on" for ColdFire architectures and "off" for M680x0 architectures. Otherwise, the default is taken from the target CPU (either the default CPU, or the one specified by -mcpu). For example, the default is "off" for -mcpu=5206 and "on" for -mcpu=5206e. GCC defines the macro "__mcfhwdiv__" when this option is enabled. -mshort Consider type "int" to be 16 bits wide, like "short int". Additionally, parameters passed on the stack are also aligned to a 16-bit boundary even on targets whose API mandates promotion to 32-bit. -mno-short Do not consider type "int" to be 16 bits wide. This is the default. -mnobitfield -mno-bitfield Do not use the bit-field instructions. The -m68000, -mcpu32 and -m5200 options imply -mnobitfield. -mbitfield Do use the bit-field instructions. The -m68020 option implies -mbitfield. This is the default if you use a configuration designed for a 68020. -mrtd Use a different function-calling convention, in which functions that take a fixed number of arguments return with the "rtd" instruction, which pops their arguments while returning. This saves one instruction in the caller since there is no need to pop the arguments there. This calling convention is incompatible with the one normally used on Unix, so you cannot use it if you need to call libraries compiled with the Unix compiler. Also, you must provide function prototypes for all functions that take variable numbers of arguments (including "printf"); otherwise incorrect code is generated for calls to those functions. In addition, seriously incorrect code results if you call a function with too many arguments. (Normally, extra arguments are harmlessly ignored.) The "rtd" instruction is supported by the 68010, 68020, 68030, 68040, 68060 and CPU32 processors, but not by the 68000 or 5200. The default is -mno-rtd. -malign-int -mno-align-int Control whether GCC aligns "int", "long", "long long", "float", "double", and "long double" variables on a 32-bit boundary (-malign-int) or a 16-bit boundary (-mno-align-int). 
Aligning variables on 32-bit boundaries produces code that runs somewhat faster on processors with 32-bit busses at the expense of more memory. Warning: if you use the -malign-int switch, GCC aligns structures containing the above types differently than most published application binary interface specifications for the m68k. -mpcrel Use the pc-relative addressing mode of the 68000 directly, instead of using a global offset table. At present, this option implies -fpic, allowing at most a 16-bit offset for pc-relative addressing. -fPIC is not presently supported with -mpcrel, though this could be supported for 68020 and higher processors. -mno-strict-align -mstrict-align Do not (do) assume that unaligned memory references are handled by the system. -msep-data Generate code that allows the data segment to be located in a different area of memory from the text segment. This allows for execute-in-place in an environment without virtual memory management. This option implies -fPIC. -mno-sep-data Generate code that assumes that the data segment follows the text segment. This is the default. -mid-shared-library Generate code that supports shared libraries via the library ID method. This allows for execute-in-place and shared libraries in an environment without virtual memory management. This option implies -fPIC. -mno-id-shared-library Generate code that doesn't assume ID-based shared libraries are being used. This is the default. -mshared-library-id=n Specifies the identification number of the ID-based shared library being compiled. Specifying a value of 0 generates more compact code; specifying other values forces the allocation of that number to the current library, but is no more space- or time-efficient than omitting this option. -mxgot -mno-xgot When generating position-independent code for ColdFire, generate code that works if the GOT has more than 8192 entries. This code is larger and slower than code generated without this option. On M680x0 processors, this option is not needed; -fPIC suffices. GCC normally uses a single instruction to load values from the GOT. While this is relatively efficient, it only works if the GOT is smaller than about 64k. Anything larger causes the linker to report an error such as: relocation truncated to fit: R_68K_GOT16O foobar If this happens, you should recompile your code with -mxgot. It should then work with very large GOTs. However, code generated with -mxgot is less efficient, since it takes 4 instructions to fetch the value of a global symbol. Note that some linkers, including newer versions of the GNU linker, can create multiple GOTs and sort GOT entries. If you have such a linker, you should only need to use -mxgot when compiling a single object file that accesses more than 8192 GOT entries. Very few do. These options have no effect unless GCC is generating position-independent code. -mlong-jump-table-offsets Use 32-bit offsets in "switch" tables. The default is to use 16-bit offsets. MCore Options These are the -m options defined for the Motorola M*Core processors. -mhardlit -mno-hardlit Inline constants into the code stream if it can be done in two instructions or less. -mdiv -mno-div Use the divide instruction. (Enabled by default). -mrelax-immediate -mno-relax-immediate Allow arbitrary-sized immediates in bit operations. -mwide-bitfields -mno-wide-bitfields Always treat bit-fields as "int"-sized. -m4byte-functions -mno-4byte-functions Force all functions to be aligned to a 4-byte boundary. 
-mcallgraph-data -mno-callgraph-data Emit callgraph information. -mslow-bytes -mno-slow-bytes Prefer word access when reading byte quantities. -mlittle-endian -mbig-endian Generate code for a little-endian target. -m210 -m340 Generate code for the 210 processor. -mno-lsim Assume that runtime support has been provided and so omit the simulator library (libsim.a) from the linker command line. -mstack-increment=size Set the maximum amount for a single stack increment operation. Large values can increase the speed of programs that contain functions that need a large amount of stack space, but they can also trigger a segmentation fault if the stack is extended too much. The default value is 0x1000. MeP Options -mabsdiff Enables the "abs" instruction, which is the absolute difference between two registers. -mall-opts Enables all the optional instructions---average, multiply, divide, bit operations, leading zero, absolute difference, min/max, clip, and saturation. -maverage Enables the "ave" instruction, which computes the average of two registers. -mbased=n Variables of size n bytes or smaller are placed in the ".based" section by default. Based variables use the $tp register as a base register, and there is a 128-byte limit to the ".based" section. -mbitops Enables the bit operation instructions---bit test ("btstm"), set ("bsetm"), clear ("bclrm"), invert ("bnotm"), and test- and-set ("tas"). -mc=name Selects which section constant data is placed in. name may be tiny, near, or far. -mclip Enables the "clip" instruction. Note that -mclip is not useful unless you also provide -mminmax. -mconfig=name Selects one of the built-in core configurations. Each MeP chip has one or more modules in it; each module has a core CPU and a variety of coprocessors, optional instructions, and peripherals. The "MeP-Integrator" tool, not part of GCC, provides these configurations through this option; using this option is the same as using all the corresponding command- line options. The default configuration is default. -mcop Enables the coprocessor instructions. By default, this is a 32-bit coprocessor. Note that the coprocessor is normally enabled via the -mconfig= option. -mcop32 Enables the 32-bit coprocessor's instructions. -mcop64 Enables the 64-bit coprocessor's instructions. -mivc2 Enables IVC2 scheduling. IVC2 is a 64-bit VLIW coprocessor. -mdc Causes constant variables to be placed in the ".near" section. -mdiv Enables the "div" and "divu" instructions. -meb Generate big-endian code. -mel Generate little-endian code. -mio-volatile Tells the compiler that any variable marked with the "io" attribute is to be considered volatile. -ml Causes variables to be assigned to the ".far" section by default. -mleadz Enables the "leadz" (leading zero) instruction. -mm Causes variables to be assigned to the ".near" section by default. -mminmax Enables the "min" and "max" instructions. -mmult Enables the multiplication and multiply-accumulate instructions. -mno-opts Disables all the optional instructions enabled by -mall-opts. -mrepeat Enables the "repeat" and "erepeat" instructions, used for low-overhead looping. -ms Causes all variables to default to the ".tiny" section. Note that there is a 65536-byte limit to this section. Accesses to these variables use the %gp base register. -msatur Enables the saturation instructions. Note that the compiler does not currently generate these itself, but this option is included for compatibility with other tools, like "as". 
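The data-placement options above (-mbased, -mtiny, -ml, -mm, -ms, -mio-volatile) interact with GCC's MeP variable attributes. The following is only a hedged sketch: the variable names, sizes, toolchain prefix and option values are invented for illustration and are not taken from this manual.

        /* Hypothetical example; a command along the lines of
           "mep-elf-gcc -mtiny=8 -mio-volatile -c placement.c" is assumed
           (the toolchain prefix and option values are placeholders).       */

        short small_counter;            /* 2 bytes: small enough for ".tiny" under -mtiny=8      */
        int   big_table[256];           /* larger than 8 bytes: left in the normal data sections */

        int   dev_status __attribute__((io));      /* "io" variable: volatile when -mio-volatile */
        int   far_buffer[16] __attribute__((far)); /* explicitly assigned to the ".far" section  */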
-msdram Link the SDRAM-based runtime instead of the default ROM-based runtime.
-msim Link the simulator run-time libraries.
-msimnovec Link the simulator runtime libraries, excluding built-in support for reset and exception vectors and tables.
-mtf Causes all functions to default to the ".far" section. Without this option, functions default to the ".near" section.
-mtiny=n Variables that are n bytes or smaller are allocated to the ".tiny" section. These variables use the $gp base register. The default for this option is 4, but note that there's a 65536-byte limit to the ".tiny" section.
MicroBlaze Options
-msoft-float Use software emulation for floating point (default).
-mhard-float Use hardware floating-point instructions.
-mmemcpy Do not optimize block moves, use "memcpy".
-mno-clearbss This option is deprecated. Use -fno-zero-initialized-in-bss instead.
-mcpu=cpu-type Use features of, and schedule code for, the given CPU. Supported values are in the format vX.YY.Z, where X is a major version, YY is the minor version, and Z is compatibility code. Example values are v3.00.a, v4.00.b, v5.00.a, v5.00.b, v6.00.a.
-mxl-soft-mul Use software multiply emulation (default).
-mxl-soft-div Use software emulation for divides (default).
-mxl-barrel-shift Use the hardware barrel shifter.
-mxl-pattern-compare Use pattern compare instructions.
-msmall-divides Use table lookup optimization for small signed integer divisions.
-mxl-stack-check This option is deprecated. Use -fstack-check instead.
-mxl-gp-opt Use GP-relative ".sdata"/".sbss" sections.
-mxl-multiply-high Use multiply high instructions for the high part of 32x32 multiply.
-mxl-float-convert Use hardware floating-point conversion instructions.
-mxl-float-sqrt Use hardware floating-point square root instruction.
-mbig-endian Generate code for a big-endian target.
-mlittle-endian Generate code for a little-endian target.
-mxl-reorder Use reorder instructions (swap and byte reversed load/store).
-mxl-mode-app-model Select application model app-model. Valid models are:
    executable Normal executable (default); uses startup code crt0.o.
    xmdstub For use with the Xilinx Microprocessor Debugger (XMD) based software intrusive debug agent called xmdstub. This uses startup file crt1.o and sets the start address of the program to 0x800.
    bootstrap For applications that are loaded using a bootloader. This model uses startup file crt2.o which does not contain a processor reset vector handler. This is suitable for transferring control on a processor reset to the bootloader rather than the application.
    novectors For applications that do not require any of the MicroBlaze vectors. This option may be useful for applications running within a monitoring application. This model uses crt3.o as a startup file.
Option -xl-mode-app-model is a deprecated alias for -mxl-mode-app-model.
-mpic-data-is-text-relative Assume that the displacement between the text and data segments is fixed at static link time. This allows data to be referenced by offset from the start of the text address instead of the GOT, since PC-relative addressing is not supported.
MIPS Options
-EB Generate big-endian code.
-EL Generate little-endian code. This is the default for mips*el-*-* configurations.
-march=arch Generate code that runs on arch, which can be the name of a generic MIPS ISA, or the name of a particular processor. The ISA names are: mips1, mips2, mips3, mips4, mips32, mips32r2, mips32r3, mips32r5, mips32r6, mips64, mips64r2, mips64r3, mips64r5 and mips64r6.
The processor names are: 4kc, 4km, 4kp, 4ksc, 4kec, 4kem, 4kep, 4ksd, 5kc, 5kf, 20kc, 24kc, 24kf2_1, 24kf1_1, 24kec, 24kef2_1, 24kef1_1, 34kc, 34kf2_1, 34kf1_1, 34kn, 74kc, 74kf2_1, 74kf1_1, 74kf3_2, 1004kc, 1004kf2_1, 1004kf1_1, i6400, i6500, interaptiv, loongson2e, loongson2f, loongson3a, gs464, gs464e, gs264e, m4k, m14k, m14kc, m14ke, m14kec, m5100, m5101, octeon, octeon+, octeon2, octeon3, orion, p5600, p6600, r2000, r3000, r3900, r4000, r4400, r4600, r4650, r4700, r5900, r6000, r8000, rm7000, rm9000, r10000, r12000, r14000, r16000, sb1, sr71000, vr4100, vr4111, vr4120, vr4130, vr4300, vr5000, vr5400, vr5500, xlr and xlp. The special value from-abi selects the most compatible architecture for the selected ABI (that is, mips1 for 32-bit ABIs and mips3 for 64-bit ABIs). The native Linux/GNU toolchain also supports the value native, which selects the best architecture option for the host processor. -march=native has no effect if GCC does not recognize the processor. In processor names, a final 000 can be abbreviated as k (for example, -march=r2k). Prefixes are optional, and vr may be written r. Names of the form nf2_1 refer to processors with FPUs clocked at half the rate of the core, names of the form nf1_1 refer to processors with FPUs clocked at the same rate as the core, and names of the form nf3_2 refer to processors with FPUs clocked a ratio of 3:2 with respect to the core. For compatibility reasons, nf is accepted as a synonym for nf2_1 while nx and bfx are accepted as synonyms for nf1_1. GCC defines two macros based on the value of this option. The first is "_MIPS_ARCH", which gives the name of target architecture, as a string. The second has the form "_MIPS_ARCH_foo", where foo is the capitalized value of "_MIPS_ARCH". For example, -march=r2000 sets "_MIPS_ARCH" to "r2000" and defines the macro "_MIPS_ARCH_R2000". Note that the "_MIPS_ARCH" macro uses the processor names given above. In other words, it has the full prefix and does not abbreviate 000 as k. In the case of from-abi, the macro names the resolved architecture (either "mips1" or "mips3"). It names the default architecture when no -march option is given. -mtune=arch Optimize for arch. Among other things, this option controls the way instructions are scheduled, and the perceived cost of arithmetic operations. The list of arch values is the same as for -march. When this option is not used, GCC optimizes for the processor specified by -march. By using -march and -mtune together, it is possible to generate code that runs on a family of processors, but optimize the code for one particular member of that family. -mtune defines the macros "_MIPS_TUNE" and "_MIPS_TUNE_foo", which work in the same way as the -march ones described above. -mips1 Equivalent to -march=mips1. -mips2 Equivalent to -march=mips2. -mips3 Equivalent to -march=mips3. -mips4 Equivalent to -march=mips4. -mips32 Equivalent to -march=mips32. -mips32r3 Equivalent to -march=mips32r3. -mips32r5 Equivalent to -march=mips32r5. -mips32r6 Equivalent to -march=mips32r6. -mips64 Equivalent to -march=mips64. -mips64r2 Equivalent to -march=mips64r2. -mips64r3 Equivalent to -march=mips64r3. -mips64r5 Equivalent to -march=mips64r5. -mips64r6 Equivalent to -march=mips64r6. -mips16 -mno-mips16 Generate (do not generate) MIPS16 code. If GCC is targeting a MIPS32 or MIPS64 architecture, it makes use of the MIPS16e ASE. MIPS16 code generation can also be controlled on a per- function basis by means of "mips16" and "nomips16" attributes. 
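As a hedged illustration of the per-function control just described, the sketch below marks one function as MIPS16 and another as standard-ISA code; the function names and the compile command are placeholders, not requirements.

        /* Assumed command line: something like
           "mips-linux-gnu-gcc -mips16 -O2 -c mixed.c"                       */

        static int __attribute__((nomips16))
        hot_loop (const int *p, int n)   /* keep this function in the standard (uncompressed) ISA */
        {
          int sum = 0;
          for (int i = 0; i < n; i++)
            sum += p[i];
          return sum;
        }

        int __attribute__((mips16))
        small_helper (int x)             /* explicitly request MIPS16 code for this function */
        {
          return hot_loop (&x, 1);
        }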
-mflip-mips16 Generate MIPS16 code on alternating functions. This option is provided for regression testing of mixed MIPS16/non-MIPS16 code generation, and is not intended for ordinary use in compiling user code. -minterlink-compressed -mno-interlink-compressed Require (do not require) that code using the standard (uncompressed) MIPS ISA be link-compatible with MIPS16 and microMIPS code, and vice versa. For example, code using the standard ISA encoding cannot jump directly to MIPS16 or microMIPS code; it must either use a call or an indirect jump. -minterlink-compressed therefore disables direct jumps unless GCC knows that the target of the jump is not compressed. -minterlink-mips16 -mno-interlink-mips16 Aliases of -minterlink-compressed and -mno-interlink-compressed. These options predate the microMIPS ASE and are retained for backwards compatibility. -mabi=32 -mabi=o64 -mabi=n32 -mabi=64 -mabi=eabi Generate code for the given ABI. Note that the EABI has a 32-bit and a 64-bit variant. GCC normally generates 64-bit code when you select a 64-bit architecture, but you can use -mgp32 to get 32-bit code instead. For information about the O64 ABI, see <http://gcc.gnu.org/projects/mipso64-abi.html >. GCC supports a variant of the o32 ABI in which floating-point registers are 64 rather than 32 bits wide. You can select this combination with -mabi=32 -mfp64. This ABI relies on the "mthc1" and "mfhc1" instructions and is therefore only supported for MIPS32R2, MIPS32R3 and MIPS32R5 processors. The register assignments for arguments and return values remain the same, but each scalar value is passed in a single 64-bit register rather than a pair of 32-bit registers. For example, scalar floating-point values are returned in $f0 only, not a $f0/$f1 pair. The set of call-saved registers also remains the same in that the even-numbered double- precision registers are saved. Two additional variants of the o32 ABI are supported to enable a transition from 32-bit to 64-bit registers. These are FPXX (-mfpxx) and FP64A (-mfp64 -mno-odd-spreg). The FPXX extension mandates that all code must execute correctly when run using 32-bit or 64-bit registers. The code can be interlinked with either FP32 or FP64, but not both. The FP64A extension is similar to the FP64 extension but forbids the use of odd-numbered single-precision registers. This can be used in conjunction with the "FRE" mode of FPUs in MIPS32R5 processors and allows both FP32 and FP64A code to interlink and run in the same process without changing FPU modes. -mabicalls -mno-abicalls Generate (do not generate) code that is suitable for SVR4-style dynamic objects. -mabicalls is the default for SVR4-based systems. -mshared -mno-shared Generate (do not generate) code that is fully position- independent, and that can therefore be linked into shared libraries. This option only affects -mabicalls. All -mabicalls code has traditionally been position- independent, regardless of options like -fPIC and -fpic. However, as an extension, the GNU toolchain allows executables to use absolute accesses for locally-binding symbols. It can also use shorter GP initialization sequences and generate direct calls to locally-defined functions. This mode is selected by -mno-shared. -mno-shared depends on binutils 2.16 or higher and generates objects that can only be linked by the GNU linker. However, the option does not affect the ABI of the final executable; it only affects the ABI of relocatable objects. Using -mno-shared generally makes executables both smaller and quicker. 
-mshared is the default. -mplt -mno-plt Assume (do not assume) that the static and dynamic linkers support PLTs and copy relocations. This option only affects -mno-shared -mabicalls. For the n64 ABI, this option has no effect without -msym32. You can make -mplt the default by configuring GCC with --with-mips-plt. The default is -mno-plt otherwise. -mxgot -mno-xgot Lift (do not lift) the usual restrictions on the size of the global offset table. GCC normally uses a single instruction to load values from the GOT. While this is relatively efficient, it only works if the GOT is smaller than about 64k. Anything larger causes the linker to report an error such as: relocation truncated to fit: R_MIPS_GOT16 foobar If this happens, you should recompile your code with -mxgot. This works with very large GOTs, although the code is also less efficient, since it takes three instructions to fetch the value of a global symbol. Note that some linkers can create multiple GOTs. If you have such a linker, you should only need to use -mxgot when a single object file accesses more than 64k's worth of GOT entries. Very few do. These options have no effect unless GCC is generating position independent code. -mgp32 Assume that general-purpose registers are 32 bits wide. -mgp64 Assume that general-purpose registers are 64 bits wide. -mfp32 Assume that floating-point registers are 32 bits wide. -mfp64 Assume that floating-point registers are 64 bits wide. -mfpxx Do not assume the width of floating-point registers. -mhard-float Use floating-point coprocessor instructions. -msoft-float Do not use floating-point coprocessor instructions. Implement floating-point calculations using library calls instead. -mno-float Equivalent to -msoft-float, but additionally asserts that the program being compiled does not perform any floating-point operations. This option is presently supported only by some bare-metal MIPS configurations, where it may select a special set of libraries that lack all floating-point support (including, for example, the floating-point "printf" formats). If code compiled with -mno-float accidentally contains floating-point operations, it is likely to suffer a link-time or run-time failure. -msingle-float Assume that the floating-point coprocessor only supports single-precision operations. -mdouble-float Assume that the floating-point coprocessor supports double- precision operations. This is the default. -modd-spreg -mno-odd-spreg Enable the use of odd-numbered single-precision floating- point registers for the o32 ABI. This is the default for processors that are known to support these registers. When using the o32 FPXX ABI, -mno-odd-spreg is set by default. -mabs=2008 -mabs=legacy These options control the treatment of the special not-a- number (NaN) IEEE 754 floating-point data with the "abs.fmt" and "neg.fmt" machine instructions. By default or when -mabs=legacy is used the legacy treatment is selected. In this case these instructions are considered arithmetic and avoided where correct operation is required and the input operand might be a NaN. A longer sequence of instructions that manipulate the sign bit of floating-point datum manually is used instead unless the -ffinite-math-only option has also been specified. The -mabs=2008 option selects the IEEE 754-2008 treatment. In this case these instructions are considered non-arithmetic and therefore operating correctly in all cases, including in particular where the input operand is a NaN. 
These instructions are therefore always used for the respective operations. -mnan=2008 -mnan=legacy These options control the encoding of the special not-a- number (NaN) IEEE 754 floating-point data. The -mnan=legacy option selects the legacy encoding. In this case quiet NaNs (qNaNs) are denoted by the first bit of their trailing significand field being 0, whereas signaling NaNs (sNaNs) are denoted by the first bit of their trailing significand field being 1. The -mnan=2008 option selects the IEEE 754-2008 encoding. In this case qNaNs are denoted by the first bit of their trailing significand field being 1, whereas sNaNs are denoted by the first bit of their trailing significand field being 0. The default is -mnan=legacy unless GCC has been configured with --with-nan=2008. -mllsc -mno-llsc Use (do not use) ll, sc, and sync instructions to implement atomic memory built-in functions. When neither option is specified, GCC uses the instructions if the target architecture supports them. -mllsc is useful if the runtime environment can emulate the instructions and -mno-llsc can be useful when compiling for nonstandard ISAs. You can make either option the default by configuring GCC with --with-llsc and --without-llsc respectively. --with-llsc is the default for some configurations; see the installation documentation for details. -mdsp -mno-dsp Use (do not use) revision 1 of the MIPS DSP ASE. This option defines the preprocessor macro "__mips_dsp". It also defines "__mips_dsp_rev" to 1. -mdspr2 -mno-dspr2 Use (do not use) revision 2 of the MIPS DSP ASE. This option defines the preprocessor macros "__mips_dsp" and "__mips_dspr2". It also defines "__mips_dsp_rev" to 2. -msmartmips -mno-smartmips Use (do not use) the MIPS SmartMIPS ASE. -mpaired-single -mno-paired-single Use (do not use) paired-single floating-point instructions. This option requires hardware floating-point support to be enabled. -mdmx -mno-mdmx Use (do not use) MIPS Digital Media Extension instructions. This option can only be used when generating 64-bit code and requires hardware floating-point support to be enabled. -mips3d -mno-mips3d Use (do not use) the MIPS-3D ASE. The option -mips3d implies -mpaired-single. -mmicromips -mno-micromips Generate (do not generate) microMIPS code. MicroMIPS code generation can also be controlled on a per- function basis by means of "micromips" and "nomicromips" attributes. -mmt -mno-mt Use (do not use) MT Multithreading instructions. -mmcu -mno-mcu Use (do not use) the MIPS MCU ASE instructions. -meva -mno-eva Use (do not use) the MIPS Enhanced Virtual Addressing instructions. -mvirt -mno-virt Use (do not use) the MIPS Virtualization (VZ) instructions. -mxpa -mno-xpa Use (do not use) the MIPS eXtended Physical Address (XPA) instructions. -mcrc -mno-crc Use (do not use) the MIPS Cyclic Redundancy Check (CRC) instructions. -mginv -mno-ginv Use (do not use) the MIPS Global INValidate (GINV) instructions. -mloongson-mmi -mno-loongson-mmi Use (do not use) the MIPS Loongson MultiMedia extensions Instructions (MMI). -mloongson-ext -mno-loongson-ext Use (do not use) the MIPS Loongson EXTensions (EXT) instructions. -mloongson-ext2 -mno-loongson-ext2 Use (do not use) the MIPS Loongson EXTensions r2 (EXT2) instructions. -mlong64 Force "long" types to be 64 bits wide. See -mlong32 for an explanation of the default and the way that the pointer size is determined. -mlong32 Force "long", "int", and pointer types to be 32 bits wide. The default size of "int"s, "long"s and pointers depends on the ABI. 
All the supported ABIs use 32-bit "int"s. The n64 ABI uses 64-bit "long"s, as does the 64-bit EABI; the others use 32-bit "long"s. Pointers are the same size as "long"s, or the same size as integer registers, whichever is smaller. -msym32 -mno-sym32 Assume (do not assume) that all symbols have 32-bit values, regardless of the selected ABI. This option is useful in combination with -mabi=64 and -mno-abicalls because it allows GCC to generate shorter and faster references to symbolic addresses. -G num Put definitions of externally-visible data in a small data section if that data is no bigger than num bytes. GCC can then generate more efficient accesses to the data; see -mgpopt for details. The default -G option depends on the configuration. -mlocal-sdata -mno-local-sdata Extend (do not extend) the -G behavior to local data too, such as to static variables in C. -mlocal-sdata is the default for all configurations. If the linker complains that an application is using too much small data, you might want to try rebuilding the less performance-critical parts with -mno-local-sdata. You might also want to build large libraries with -mno-local-sdata, so that the libraries leave more room for the main program. -mextern-sdata -mno-extern-sdata Assume (do not assume) that externally-defined data is in a small data section if the size of that data is within the -G limit. -mextern-sdata is the default for all configurations. If you compile a module Mod with -mextern-sdata -G num -mgpopt, and Mod references a variable Var that is no bigger than num bytes, you must make sure that Var is placed in a small data section. If Var is defined by another module, you must either compile that module with a high-enough -G setting or attach a "section" attribute to Var's definition. If Var is common, you must link the application with a high-enough -G setting. The easiest way of satisfying these restrictions is to compile and link every module with the same -G option. However, you may wish to build a library that supports several different small data limits. You can do this by compiling the library with the highest supported -G setting and additionally using -mno-extern-sdata to stop the library from making assumptions about externally-defined data. -mgpopt -mno-gpopt Use (do not use) GP-relative accesses for symbols that are known to be in a small data section; see -G, -mlocal-sdata and -mextern-sdata. -mgpopt is the default for all configurations. -mno-gpopt is useful for cases where the $gp register might not hold the value of "_gp". For example, if the code is part of a library that might be used in a boot monitor, programs that call boot monitor routines pass an unknown value in $gp. (In such situations, the boot monitor itself is usually compiled with -G0.) -mno-gpopt implies -mno-local-sdata and -mno-extern-sdata. -membedded-data -mno-embedded-data Allocate variables to the read-only data section first if possible, then next in the small data section if possible, otherwise in data. This gives slightly slower code than the default, but reduces the amount of RAM required when executing, and thus may be preferred for some embedded systems. -muninit-const-in-rodata -mno-uninit-const-in-rodata Put uninitialized "const" variables in the read-only data section. This option is only meaningful in conjunction with -membedded-data. -mcode-readable=setting Specify whether GCC may generate code that reads from executable sections. 
There are three possible settings: -mcode-readable=yes Instructions may freely access executable sections. This is the default setting. -mcode-readable=pcrel MIPS16 PC-relative load instructions can access executable sections, but other instructions must not do so. This option is useful on 4KSc and 4KSd processors when the code TLBs have the Read Inhibit bit set. It is also useful on processors that can be configured to have a dual instruction/data SRAM interface and that, like the M4K, automatically redirect PC-relative loads to the instruction RAM. -mcode-readable=no Instructions must not access executable sections. This option can be useful on targets that are configured to have a dual instruction/data SRAM interface but that (unlike the M4K) do not automatically redirect PC- relative loads to the instruction RAM. -msplit-addresses -mno-split-addresses Enable (disable) use of the "%hi()" and "%lo()" assembler relocation operators. This option has been superseded by -mexplicit-relocs but is retained for backwards compatibility. -mexplicit-relocs -mno-explicit-relocs Use (do not use) assembler relocation operators when dealing with symbolic addresses. The alternative, selected by -mno-explicit-relocs, is to use assembler macros instead. -mexplicit-relocs is the default if GCC was configured to use an assembler that supports relocation operators. -mcheck-zero-division -mno-check-zero-division Trap (do not trap) on integer division by zero. The default is -mcheck-zero-division. -mdivide-traps -mdivide-breaks MIPS systems check for division by zero by generating either a conditional trap or a break instruction. Using traps results in smaller code, but is only supported on MIPS II and later. Also, some versions of the Linux kernel have a bug that prevents trap from generating the proper signal ("SIGFPE"). Use -mdivide-traps to allow conditional traps on architectures that support them and -mdivide-breaks to force the use of breaks. The default is usually -mdivide-traps, but this can be overridden at configure time using --with-divide=breaks. Divide-by-zero checks can be completely disabled using -mno-check-zero-division. -mload-store-pairs -mno-load-store-pairs Enable (disable) an optimization that pairs consecutive load or store instructions to enable load/store bonding. This option is enabled by default but only takes effect when the selected architecture is known to support bonding. -mmemcpy -mno-memcpy Force (do not force) the use of "memcpy" for non-trivial block moves. The default is -mno-memcpy, which allows GCC to inline most constant-sized copies. -mlong-calls -mno-long-calls Disable (do not disable) use of the "jal" instruction. Calling functions using "jal" is more efficient but requires the caller and callee to be in the same 256 megabyte segment. This option has no effect on abicalls code. The default is -mno-long-calls. -mmad -mno-mad Enable (disable) use of the "mad", "madu" and "mul" instructions, as provided by the R4650 ISA. -mimadd -mno-imadd Enable (disable) use of the "madd" and "msub" integer instructions. The default is -mimadd on architectures that support "madd" and "msub" except for the 74k architecture where it was found to generate slower code. -mfused-madd -mno-fused-madd Enable (disable) use of the floating-point multiply- accumulate instructions, when they are available. The default is -mfused-madd. 
On the R8000 CPU when multiply-accumulate instructions are used, the intermediate product is calculated to infinite precision and is not subject to the FCSR Flush to Zero bit. This may be undesirable in some circumstances. On other processors the result is numerically identical to the equivalent computation using separate multiply, add, subtract and negate instructions. -nocpp Tell the MIPS assembler to not run its preprocessor over user assembler files (with a .s suffix) when assembling them. -mfix-24k -mno-fix-24k Work around the 24K E48 (lost data on stores during refill) errata. The workarounds are implemented by the assembler rather than by GCC. -mfix-r4000 -mno-fix-r4000 Work around certain R4000 CPU errata: - A double-word or a variable shift may give an incorrect result if executed immediately after starting an integer division. - A double-word or a variable shift may give an incorrect result if executed while an integer multiplication is in progress. - An integer division may give an incorrect result if started in a delay slot of a taken branch or a jump. -mfix-r4400 -mno-fix-r4400 Work around certain R4400 CPU errata: - A double-word or a variable shift may give an incorrect result if executed immediately after starting an integer division. -mfix-r10000 -mno-fix-r10000 Work around certain R10000 errata: - "ll"/"sc" sequences may not behave atomically on revisions prior to 3.0. They may deadlock on revisions 2.6 and earlier. This option can only be used if the target architecture supports branch-likely instructions. -mfix-r10000 is the default when -march=r10000 is used; -mno-fix-r10000 is the default otherwise. -mfix-r5900 -mno-fix-r5900 Do not attempt to schedule the preceding instruction into the delay slot of a branch instruction placed at the end of a short loop of six instructions or fewer and always schedule a "nop" instruction there instead. The short loop bug under certain conditions causes loops to execute only once or twice, due to a hardware bug in the R5900 chip. The workaround is implemented by the assembler rather than by GCC. -mfix-rm7000 -mno-fix-rm7000 Work around the RM7000 "dmult"/"dmultu" errata. The workarounds are implemented by the assembler rather than by GCC. -mfix-vr4120 -mno-fix-vr4120 Work around certain VR4120 errata: - "dmultu" does not always produce the correct result. - "div" and "ddiv" do not always produce the correct result if one of the operands is negative. The workarounds for the division errata rely on special functions in libgcc.a. At present, these functions are only provided by the "mips64vr*-elf" configurations. Other VR4120 errata require a NOP to be inserted between certain pairs of instructions. These errata are handled by the assembler, not by GCC itself. -mfix-vr4130 Work around the VR4130 "mflo"/"mfhi" errata. The workarounds are implemented by the assembler rather than by GCC, although GCC avoids using "mflo" and "mfhi" if the VR4130 "macc", "macchi", "dmacc" and "dmacchi" instructions are available instead. -mfix-sb1 -mno-fix-sb1 Work around certain SB-1 CPU core errata. (This flag currently works around the SB-1 revision 2 "F1" and "F2" floating-point errata.) -mr10k-cache-barrier=setting Specify whether GCC should insert cache barriers to avoid the side effects of speculation on R10K processors. In common with many processors, the R10K tries to predict the outcome of a conditional branch and speculatively executes instructions from the "taken" branch. It later aborts these instructions if the predicted outcome is wrong. 
However, on the R10K, even aborted instructions can have side effects. This problem only affects kernel stores and, depending on the system, kernel loads. As an example, a speculatively-executed store may load the target memory into cache and mark the cache line as dirty, even if the store itself is later aborted. If a DMA operation writes to the same area of memory before the "dirty" line is flushed, the cached data overwrites the DMA-ed data. See the R10K processor manual for a full description, including other potential problems.
One workaround is to insert cache barrier instructions before every memory access that might be speculatively executed and that might have side effects even if aborted. -mr10k-cache-barrier=setting controls GCC's implementation of this workaround. It assumes that aborted accesses to any byte in the following regions do not have side effects:
1. the memory occupied by the current function's stack frame;
2. the memory occupied by an incoming stack argument;
3. the memory occupied by an object with a link-time-constant address.
It is the kernel's responsibility to ensure that speculative accesses to these regions are indeed safe. If the input program contains a function declaration such as:
        void foo (void);
then the implementation of "foo" must allow "j foo" and "jal foo" to be executed speculatively. GCC honors this restriction for functions it compiles itself. It expects non-GCC functions (such as hand-written assembly code) to do the same. The option has three forms:
-mr10k-cache-barrier=load-store Insert a cache barrier before a load or store that might be speculatively executed and that might have side effects even if aborted.
-mr10k-cache-barrier=store Insert a cache barrier before a store that might be speculatively executed and that might have side effects even if aborted.
-mr10k-cache-barrier=none Disable the insertion of cache barriers. This is the default setting.
-mflush-func=func -mno-flush-func Specifies the function to call to flush the I and D caches, or to not call any such function. If called, the function must take the same arguments as the common "_flush_func", that is, the address of the memory range for which the cache is being flushed, the size of the memory range, and the number 3 (to flush both caches). The default depends on the target GCC was configured for, but commonly is either "_flush_func" or "__cpu_flush".
-mbranch-cost=num Set the cost of branches to roughly num "simple" instructions. This cost is only a heuristic and is not guaranteed to produce consistent results across releases. A zero cost redundantly selects the default, which is based on the -mtune setting.
-mbranch-likely -mno-branch-likely Enable or disable use of Branch Likely instructions, regardless of the default for the selected architecture. By default, Branch Likely instructions may be generated if they are supported by the selected architecture. An exception is for the MIPS32 and MIPS64 architectures and processors that implement those architectures; for those, Branch Likely instructions are not generated by default because the MIPS32 and MIPS64 architectures specifically deprecate their use.
-mcompact-branches=never -mcompact-branches=optimal -mcompact-branches=always These options control which form of branches will be generated. The default is -mcompact-branches=optimal. The -mcompact-branches=never option ensures that compact branch instructions will never be generated.
The -mcompact-branches=always option ensures that a compact branch instruction will be generated if available. If a compact branch instruction is not available, a delay slot form of the branch will be used instead. This option is supported from MIPS Release 6 onwards. The -mcompact-branches=optimal option will cause a delay slot branch to be used if one is available in the current ISA and the delay slot is successfully filled. If the delay slot is not filled, a compact branch will be chosen if one is available. -mfp-exceptions -mno-fp-exceptions Specifies whether FP exceptions are enabled. This affects how FP instructions are scheduled for some processors. The default is that FP exceptions are enabled. For instance, on the SB-1, if FP exceptions are disabled, and we are emitting 64-bit code, then we can use both FP pipes. Otherwise, we can only use one FP pipe. -mvr4130-align -mno-vr4130-align The VR4130 pipeline is two-way superscalar, but can only issue two instructions together if the first one is 8-byte aligned. When this option is enabled, GCC aligns pairs of instructions that it thinks should execute in parallel. This option only has an effect when optimizing for the VR4130. It normally makes code faster, but at the expense of making it bigger. It is enabled by default at optimization level -O3. -msynci -mno-synci Enable (disable) generation of "synci" instructions on architectures that support it. The "synci" instructions (if enabled) are generated when "__builtin___clear_cache" is compiled. This option defaults to -mno-synci, but the default can be overridden by configuring GCC with --with-synci. When compiling code for single processor systems, it is generally safe to use "synci". However, on many multi-core (SMP) systems, it does not invalidate the instruction caches on all cores and may lead to undefined behavior. -mrelax-pic-calls -mno-relax-pic-calls Try to turn PIC calls that are normally dispatched via register $25 into direct calls. This is only possible if the linker can resolve the destination at link time and if the destination is within range for a direct call. -mrelax-pic-calls is the default if GCC was configured to use an assembler and a linker that support the ".reloc" assembly directive and -mexplicit-relocs is in effect. With -mno-explicit-relocs, this optimization can be performed by the assembler and the linker alone without help from the compiler. -mmcount-ra-address -mno-mcount-ra-address Emit (do not emit) code that allows "_mcount" to modify the calling function's return address. When enabled, this option extends the usual "_mcount" interface with a new ra-address parameter, which has type "intptr_t *" and is passed in register $12. "_mcount" can then modify the return address by doing both of the following: * Returning the new address in register $31. * Storing the new address in "*ra-address", if ra-address is nonnull. The default is -mno-mcount-ra-address. -mframe-header-opt -mno-frame-header-opt Enable (disable) frame header optimization in the o32 ABI. When using the o32 ABI, calling functions will allocate 16 bytes on the stack for the called function to write out register arguments. When enabled, this optimization will suppress the allocation of the frame header if it can be determined that it is unused. This optimization is off by default at all optimization levels. -mlxc1-sxc1 -mno-lxc1-sxc1 When applicable, enable (disable) the generation of "lwxc1", "swxc1", "ldxc1", "sdxc1" instructions. Enabled by default. 
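To put the -msynci note above in context, the sketch below shows the kind of code for which GCC expands "__builtin___clear_cache"; the function name, buffer handling and command line are assumptions made only for illustration.

        /* Assumed command line: something like
           "mips-linux-gnu-gcc -march=mips32r2 -msynci -O2 -c jit.c"         */

        #include <string.h>

        void install_code (void *dst, const void *src, unsigned int len)
        {
          memcpy (dst, src, len);
          /* GCC expands this builtin inline; with -msynci it can emit "synci"
             instead of calling a cache-flushing routine.  As noted above,
             this may be unsafe on some multi-core (SMP) systems.            */
          __builtin___clear_cache ((char *) dst, (char *) dst + len);
        }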
-mmadd4 -mno-madd4 When applicable, enable (disable) the generation of 4-operand "madd.s", "madd.d" and related instructions. Enabled by default.
MMIX Options These options are defined for the MMIX:
-mlibfuncs -mno-libfuncs Specify that intrinsic library functions are being compiled, passing all values in registers, no matter the size.
-mepsilon -mno-epsilon Generate floating-point comparison instructions that compare with respect to the "rE" epsilon register.
-mabi=mmixware -mabi=gnu Generate code that passes function parameters and return values that (in the called function) are seen as registers $0 and up, as opposed to the GNU ABI which uses global registers $231 and up.
-mzero-extend -mno-zero-extend When reading data from memory in sizes shorter than 64 bits, use (do not use) zero-extending load instructions by default, rather than sign-extending ones.
-mknuthdiv -mno-knuthdiv Make the result of a division yielding a remainder have the same sign as the divisor. With the default, -mno-knuthdiv, the sign of the remainder follows the sign of the dividend. Both methods are arithmetically valid, the latter being almost exclusively used.
-mtoplevel-symbols -mno-toplevel-symbols Prepend (do not prepend) a : to all global symbols, so the assembly code can be used with the "PREFIX" assembly directive.
-melf Generate an executable in the ELF format, rather than the default mmo format used by the mmix simulator.
-mbranch-predict -mno-branch-predict Use (do not use) the probable-branch instructions, when static branch prediction indicates a probable branch.
-mbase-addresses -mno-base-addresses Generate (do not generate) code that uses base addresses. Using a base address automatically generates a request (handled by the assembler and the linker) for a constant to be set up in a global register. The register is used for one or more base address requests within the range 0 to 255 from the value held in the register. This generally leads to short and fast code, but the number of different data items that can be addressed is limited. This means that a program that uses lots of static data may require -mno-base-addresses.
-msingle-exit -mno-single-exit Force (do not force) generated code to have a single exit point in each function.
MN10300 Options These -m options are defined for Matsushita MN10300 architectures:
-mmult-bug Generate code to avoid bugs in the multiply instructions for the MN10300 processors. This is the default.
-mno-mult-bug Do not generate code to avoid bugs in the multiply instructions for the MN10300 processors.
-mam33 Generate code using features specific to the AM33 processor.
-mno-am33 Do not generate code using features specific to the AM33 processor. This is the default.
-mam33-2 Generate code using features specific to the AM33/2.0 processor.
-mam34 Generate code using features specific to the AM34 processor.
-mtune=cpu-type Use the timing characteristics of the indicated CPU type when scheduling instructions. This does not change the targeted processor type. The CPU type must be one of mn10300, am33, am33-2 or am34.
-mreturn-pointer-on-d0 When generating a function that returns a pointer, return the pointer in both "a0" and "d0". Otherwise, the pointer is returned only in "a0", and attempts to call such functions without a prototype result in errors. Note that this option is on by default; use -mno-return-pointer-on-d0 to disable it.
-mno-crt0 Do not link in the C run-time initialization object file.
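Returning briefly to the MMIX -mknuthdiv option described above: dividing -7 by 2 under the default truncating convention gives quotient -3 and remainder -1 (the remainder follows the dividend), while the floored "Knuth" convention gives quotient -4 and remainder 1 (the remainder follows the divisor). The sketch below only illustrates this arithmetic in portable C; it does not claim to show how the option maps onto C's "/" and "%" operators.

        #include <stdio.h>

        int main (void)
        {
          int n = -7, d = 2;
          /* Truncating division: remainder takes the sign of the dividend
             (the -mno-knuthdiv default described above).                    */
          printf ("truncated: q=%d r=%d\n", n / d, n % d);   /* q=-3 r=-1 */
          /* Floored division: remainder takes the sign of the divisor
             (the -mknuthdiv behavior).                                      */
          int q = n / d;
          int r = n % d;
          if (r != 0 && ((r < 0) != (d < 0)))
            {
              q -= 1;
              r += d;
            }
          printf ("floored:   q=%d r=%d\n", q, r);           /* q=-4 r=1  */
          return 0;
        }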
-mrelax Indicate to the linker that it should perform a relaxation optimization pass to shorten branches, calls and absolute memory addresses. This option only has an effect when used on the command line for the final link step. This option makes symbolic debugging impossible.
-mliw Allow the compiler to generate Long Instruction Word instructions if the target is the AM33 or later. This is the default. This option defines the preprocessor macro "__LIW__".
-mno-liw Do not allow the compiler to generate Long Instruction Word instructions. This option defines the preprocessor macro "__NO_LIW__".
-msetlb Allow the compiler to generate the SETLB and Lcc instructions if the target is the AM33 or later. This is the default. This option defines the preprocessor macro "__SETLB__".
-mno-setlb Do not allow the compiler to generate SETLB or Lcc instructions. This option defines the preprocessor macro "__NO_SETLB__".
Moxie Options
-meb Generate big-endian code. This is the default for moxie-*-* configurations.
-mel Generate little-endian code.
-mmul.x Generate mul.x and umul.x instructions. This is the default for moxiebox-*-* configurations.
-mno-crt0 Do not link in the C run-time initialization object file.
MSP430 Options These options are defined for the MSP430:
-masm-hex Force assembly output to always use hex constants. Normally such constants are signed decimals, but this option is available for testsuite and/or aesthetic purposes.
-mmcu= Select the MCU to target. This is used to create a C preprocessor symbol based upon the MCU name, converted to upper case and pre- and post-fixed with __. This in turn is used by the msp430.h header file to select an MCU-specific supplementary header file. The option also sets the ISA to use. If the MCU name is one that is known to only support the 430 ISA then that is selected, otherwise the 430X ISA is selected. A generic MCU name of msp430 can also be used to select the 430 ISA. Similarly the generic msp430x MCU name selects the 430X ISA. In addition an MCU-specific linker script is added to the linker command line. The script's name is the name of the MCU with .ld appended. Thus specifying -mmcu=xxx on the gcc command line defines the C preprocessor symbol "__XXX__" and causes the linker to search for a script called xxx.ld. This option is also passed on to the assembler.
-mwarn-mcu -mno-warn-mcu This option enables or disables warnings about conflicts between the MCU name specified by the -mmcu option and the ISA set by the -mcpu option and/or the hardware multiply support set by the -mhwmult option. It also toggles warnings about unrecognized MCU names. This option is on by default.
-mcpu= Specifies the ISA to use. Accepted values are msp430, msp430x and msp430xv2. This option is deprecated. The -mmcu= option should be used to select the ISA.
-msim Link to the simulator runtime libraries and linker script. Overrides any scripts that would be selected by the -mmcu= option.
-mlarge Use large-model addressing (20-bit pointers, 32-bit "size_t").
-msmall Use small-model addressing (16-bit pointers, 16-bit "size_t").
-mrelax This option is passed to the assembler and linker, and allows the linker to perform certain optimizations that cannot be done until the final link.
-mhwmult= Describes the type of hardware multiply supported by the target. Accepted values are none for no hardware multiply, 16bit for the original 16-bit-only multiply supported by early MCUs,
32bit for the 16/32-bit multiply supported by later MCUs and f5series for the 16/32-bit multiply supported by F5-series MCUs. A value of auto can also be given. This tells GCC to deduce the hardware multiply support based upon the MCU name provided by the -mmcu option. If no -mmcu option is specified or if the MCU name is not recognized then no hardware multiply support is assumed. "auto" is the default setting. Hardware multiplies are normally performed by calling a library routine. This saves space in the generated code. When compiling at -O3 or higher however the hardware multiplier is invoked inline. This makes for bigger, but faster code. The hardware multiply routines disable interrupts whilst running and restore the previous interrupt state when they finish. This makes them safe to use inside interrupt handlers as well as in normal code. -minrt Enable the use of a minimum runtime environment - no static initializers or constructors. This is intended for memory- constrained devices. The compiler includes special symbols in some objects that tell the linker and runtime which code fragments are required. -mcode-region= -mdata-region= These options tell the compiler where to place functions and data that do not have one of the "lower", "upper", "either" or "section" attributes. Possible values are "lower", "upper", "either" or "any". The first three behave like the corresponding attribute. The fourth possible value - "any" - is the default. It leaves placement entirely up to the linker script and how it assigns the standard sections (".text", ".data", etc) to the memory regions. -msilicon-errata= This option passes on a request to assembler to enable the fixes for the named silicon errata. -msilicon-errata-warn= This option passes on a request to the assembler to enable warning messages when a silicon errata might need to be applied. NDS32 Options These options are defined for NDS32 implementations: -mbig-endian Generate code in big-endian mode. -mlittle-endian Generate code in little-endian mode. -mreduced-regs Use reduced-set registers for register allocation. -mfull-regs Use full-set registers for register allocation. -mcmov Generate conditional move instructions. -mno-cmov Do not generate conditional move instructions. -mext-perf Generate performance extension instructions. -mno-ext-perf Do not generate performance extension instructions. -mext-perf2 Generate performance extension 2 instructions. -mno-ext-perf2 Do not generate performance extension 2 instructions. -mext-string Generate string extension instructions. -mno-ext-string Do not generate string extension instructions. -mv3push Generate v3 push25/pop25 instructions. -mno-v3push Do not generate v3 push25/pop25 instructions. -m16-bit Generate 16-bit instructions. -mno-16-bit Do not generate 16-bit instructions. -misr-vector-size=num Specify the size of each interrupt vector, which must be 4 or 16. -mcache-block-size=num Specify the size of each cache block, which must be a power of 2 between 4 and 512. -march=arch Specify the name of the target architecture. -mcmodel=code-model Set the code model to one of small All the data and read-only data segments must be within 512KB addressing space. The text segment must be within 16MB addressing space. medium The data segment must be within 512KB while the read-only data segment can be within 4GB addressing space. The text segment should be still within 16MB addressing space. large All the text and data segments can be within 4GB addressing space. 
-mctor-dtor Enable constructor/destructor feature. -mrelax Guide linker to relax instructions. Nios II Options These are the options defined for the Altera Nios II processor. -G num Put global and static objects less than or equal to num bytes into the small data or BSS sections instead of the normal data or BSS sections. The default value of num is 8. -mgpopt=option -mgpopt -mno-gpopt Generate (do not generate) GP-relative accesses. The following option names are recognized: none Do not generate GP-relative accesses. local Generate GP-relative accesses for small data objects that are not external, weak, or uninitialized common symbols. Also use GP-relative addressing for objects that have been explicitly placed in a small data section via a "section" attribute. global As for local, but also generate GP-relative accesses for small data objects that are external, weak, or common. If you use this option, you must ensure that all parts of your program (including libraries) are compiled with the same -G setting. data Generate GP-relative accesses for all data objects in the program. If you use this option, the entire data and BSS segments of your program must fit in 64K of memory and you must use an appropriate linker script to allocate them within the addressable range of the global pointer. all Generate GP-relative addresses for function pointers as well as data pointers. If you use this option, the entire text, data, and BSS segments of your program must fit in 64K of memory and you must use an appropriate linker script to allocate them within the addressable range of the global pointer. -mgpopt is equivalent to -mgpopt=local, and -mno-gpopt is equivalent to -mgpopt=none. The default is -mgpopt except when -fpic or -fPIC is specified to generate position-independent code. Note that the Nios II ABI does not permit GP-relative accesses from shared libraries. You may need to specify -mno-gpopt explicitly when building programs that include large amounts of small data, including large GOT data sections. In this case, the 16-bit offset for GP-relative addressing may not be large enough to allow access to the entire small data section. -mgprel-sec=regexp This option specifies additional section names that can be accessed via GP-relative addressing. It is most useful in conjunction with "section" attributes on variable declarations and a custom linker script. The regexp is a POSIX Extended Regular Expression. This option does not affect the behavior of the -G option, and the specified sections are in addition to the standard ".sdata" and ".sbss" small-data sections that are recognized by -mgpopt. -mr0rel-sec=regexp This option specifies names of sections that can be accessed via a 16-bit offset from "r0"; that is, in the low 32K or high 32K of the 32-bit address space. It is most useful in conjunction with "section" attributes on variable declarations and a custom linker script. The regexp is a POSIX Extended Regular Expression. In contrast to the use of GP-relative addressing for small data, zero-based addressing is never generated by default and there are no conventional section names used in standard linker scripts for sections in the low or high areas of memory. -mel -meb Generate little-endian (default) or big-endian (experimental) code, respectively. -march=arch This specifies the name of the target Nios II architecture. GCC uses this name to determine what kind of instructions it can emit when generating assembly code. Permissible names are: r1, r2. 
The preprocessor macro "__nios2_arch__" is available to programs, with value 1 or 2, indicating the targeted ISA level. -mbypass-cache -mno-bypass-cache Force all load and store instructions to always bypass cache by using I/O variants of the instructions. The default is not to bypass the cache. -mno-cache-volatile -mcache-volatile Volatile memory access bypass the cache using the I/O variants of the load and store instructions. The default is not to bypass the cache. -mno-fast-sw-div -mfast-sw-div Do not use table-based fast divide for small numbers. The default is to use the fast divide at -O3 and above. -mno-hw-mul -mhw-mul -mno-hw-mulx -mhw-mulx -mno-hw-div -mhw-div Enable or disable emitting "mul", "mulx" and "div" family of instructions by the compiler. The default is to emit "mul" and not emit "div" and "mulx". -mbmx -mno-bmx -mcdx -mno-cdx Enable or disable generation of Nios II R2 BMX (bit manipulation) and CDX (code density) instructions. Enabling these instructions also requires -march=r2. Since these instructions are optional extensions to the R2 architecture, the default is not to emit them. -mcustom-insn=N -mno-custom-insn Each -mcustom-insn=N option enables use of a custom instruction with encoding N when generating code that uses insn. For example, -mcustom-fadds=253 generates custom instruction 253 for single-precision floating-point add operations instead of the default behavior of using a library call. The following values of insn are supported. Except as otherwise noted, floating-point operations are expected to be implemented with normal IEEE 754 semantics and correspond directly to the C operators or the equivalent GCC built-in functions. Single-precision floating point: fadds, fsubs, fdivs, fmuls Binary arithmetic operations. fnegs Unary negation. fabss Unary absolute value. fcmpeqs, fcmpges, fcmpgts, fcmples, fcmplts, fcmpnes Comparison operations. fmins, fmaxs Floating-point minimum and maximum. These instructions are only generated if -ffinite-math-only is specified. fsqrts Unary square root operation. fcoss, fsins, ftans, fatans, fexps, flogs Floating-point trigonometric and exponential functions. These instructions are only generated if -funsafe-math-optimizations is also specified. Double-precision floating point: faddd, fsubd, fdivd, fmuld Binary arithmetic operations. fnegd Unary negation. fabsd Unary absolute value. fcmpeqd, fcmpged, fcmpgtd, fcmpled, fcmpltd, fcmpned Comparison operations. fmind, fmaxd Double-precision minimum and maximum. These instructions are only generated if -ffinite-math-only is specified. fsqrtd Unary square root operation. fcosd, fsind, ftand, fatand, fexpd, flogd Double-precision trigonometric and exponential functions. These instructions are only generated if -funsafe-math-optimizations is also specified. Conversions: fextsd Conversion from single precision to double precision. ftruncds Conversion from double precision to single precision. fixsi, fixsu, fixdi, fixdu Conversion from floating point to signed or unsigned integer types, with truncation towards zero. round Conversion from single-precision floating point to signed integer, rounding to the nearest integer and ties away from zero. This corresponds to the "__builtin_lroundf" function when -fno-math-errno is used. floatis, floatus, floatid, floatud Conversion from signed or unsigned integer types to floating-point types. 
In addition, all of the following transfer instructions for internal registers X and Y must be provided to use any of the double-precision floating-point instructions. Custom instructions taking two double-precision source operands expect the first operand in the 64-bit register X. The other operand (or only operand of a unary operation) is given to the custom arithmetic instruction with the least significant half in source register src1 and the most significant half in src2. A custom instruction that returns a double-precision result returns the most significant 32 bits in the destination register and the other half in 32-bit register Y. GCC automatically generates the necessary code sequences to write register X and/or read register Y when double-precision floating-point instructions are used. fwrx Write src1 into the least significant half of X and src2 into the most significant half of X. fwry Write src1 into Y. frdxhi, frdxlo Read the most or least (respectively) significant half of X and store it in dest. frdy Read the value of Y and store it into dest. Note that you can gain more local control over generation of Nios II custom instructions by using the "target("custom-insn=N")" and "target("no-custom-insn")" function attributes or pragmas. -mcustom-fpu-cfg=name This option enables a predefined, named set of custom instruction encodings (see -mcustom-insn above). Currently, the following sets are defined: -mcustom-fpu-cfg=60-1 is equivalent to: -mcustom-fmuls=252 -mcustom-fadds=253 -mcustom-fsubs=254 -fsingle-precision-constant -mcustom-fpu-cfg=60-2 is equivalent to: -mcustom-fmuls=252 -mcustom-fadds=253 -mcustom-fsubs=254 -mcustom-fdivs=255 -fsingle-precision-constant -mcustom-fpu-cfg=72-3 is equivalent to: -mcustom-floatus=243 -mcustom-fixsi=244 -mcustom-floatis=245 -mcustom-fcmpgts=246 -mcustom-fcmples=249 -mcustom-fcmpeqs=250 -mcustom-fcmpnes=251 -mcustom-fmuls=252 -mcustom-fadds=253 -mcustom-fsubs=254 -mcustom-fdivs=255 -fsingle-precision-constant Custom instruction assignments given by individual -mcustom-insn= options override those given by -mcustom-fpu-cfg=, regardless of the order of the options on the command line. Note that you can gain more local control over selection of an FPU configuration by using the "target("custom-fpu-cfg=name")" function attribute or pragma. These additional -m options are available for the Altera Nios II ELF (bare-metal) target: -mhal Link with HAL BSP. This suppresses linking with the GCC-provided C runtime startup and termination code, and is typically used in conjunction with -msys-crt0= to specify the location of the alternate startup code provided by the HAL BSP. -msmallc Link with a limited version of the C library, -lsmallc, rather than Newlib. -msys-crt0=startfile startfile is the file name of the startfile (crt0) to use when linking. This option is only useful in conjunction with -mhal. -msys-lib=systemlib systemlib is the library name of the library that provides low-level system calls required by the C library, e.g. "read" and "write". This option is typically used to link with a library provided by a HAL BSP. Nvidia PTX Options These options are defined for Nvidia PTX: -m32 -m64 Generate code for 32-bit or 64-bit ABI. -misa=ISA-string Generate code for the specified PTX ISA (e.g. sm_35). ISA strings must be lower-case. Valid ISA strings include sm_30 and sm_35. The default ISA is sm_30. -mmainkernel Link in code for a __main kernel. This is for stand-alone execution instead of offloading.
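The following stand-alone nvptx example is only a sketch: the nvptx-none-gcc triplet and the file name hello.c are assumptions that depend on how your offloading compiler was built.

    /* hello.c - hypothetical stand-alone program for the Nvidia PTX target.
     *
     *   nvptx-none-gcc -m64 -misa=sm_35 -mmainkernel -O2 -o hello hello.c
     *
     * -mmainkernel links in the __main kernel so the program runs on its own
     * rather than as an offload region; -misa selects the PTX ISA to emit.
     */
    int result;

    int main(void)
    {
        result = 6 * 7;   /* trivial work so the generated kernel does something */
        return 0;
    }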
-moptimize Apply partitioned execution optimizations. This is the default when any level of optimization is selected. -msoft-stack Generate code that does not use ".local" memory directly for stack storage. Instead, a per-warp stack pointer is maintained explicitly. This enables variable-length stack allocation (with variable-length arrays or "alloca"), and when global memory is used for underlying storage, makes it possible to access automatic variables from other threads, or with atomic instructions. This code generation variant is used for OpenMP offloading, but the option is exposed on its own for the purpose of testing the compiler; to generate code suitable for linking into programs using OpenMP offloading, use option -mgomp. -muniform-simt Switch to a code generation variant that allows executing all threads in each warp, while maintaining memory state and side effects as if only one thread in each warp was active outside of OpenMP SIMD regions. All atomic operations and calls to runtime (malloc, free, vprintf) are conditionally executed (iff current lane index equals the master lane index), and the register being assigned is copied via a shuffle instruction from the master lane. Outside of SIMD regions lane 0 is the master; inside, each thread sees itself as the master. Shared memory array "int __nvptx_uni[]" stores all-zeros or all-ones bitmasks for each warp, indicating current mode (0 outside of SIMD regions). Each thread can bitwise-and the bitmask at position "tid.y" with current lane index to compute the master lane index. -mgomp Generate code for use in OpenMP offloading: enables -msoft-stack and -muniform-simt options, and selects corresponding multilib variant. OpenRISC Options These options are defined for OpenRISC: -mboard=name Configure a board-specific runtime. This will be passed to the linker for newlib board library linking. The default is "or1ksim". -mnewlib For compatibility, it's always newlib for elf now. -mhard-div Generate code for hardware which supports divide instructions. This is the default. -mhard-mul Generate code for hardware which supports multiply instructions. This is the default. -mcmov Generate code for hardware which supports the conditional move ("l.cmov") instruction. -mror Generate code for hardware which supports rotate right instructions. -msext Generate code for hardware which supports sign-extension instructions. -msfimm Generate code for hardware which supports set flag immediate ("l.sf*i") instructions. -mshftimm Generate code for hardware which supports shift immediate related instructions (i.e. "l.srai", "l.srli", "l.slli", "l.rori"). Note, to enable generation of the "l.rori" instruction the -mror flag must also be specified. -msoft-div Generate code for hardware which requires divide instruction emulation. -msoft-mul Generate code for hardware which requires multiply instruction emulation. PDP-11 Options These options are defined for the PDP-11: -mfpu Use hardware FPP floating point. This is the default. (FIS floating point on the PDP-11/40 is not supported.) Implies -m45. -msoft-float Do not use hardware floating point. -mac0 Return floating-point results in ac0 (fr0 in Unix assembler syntax). -mno-ac0 Return floating-point results in memory. This is the default. -m40 Generate code for a PDP-11/40. Implies -msoft-float -mno-split. -m45 Generate code for a PDP-11/45. This is the default. -m10 Generate code for a PDP-11/10. Implies -msoft-float -mno-split. -mint16 -mno-int32 Use 16-bit "int". This is the default.
-mint32 -mno-int16 Use 32-bit "int". -msplit Target has split instruction and data space. Implies -m45. -munix-asm Use Unix assembler syntax. -mdec-asm Use DEC assembler syntax. -mgnu-asm Use GNU assembler syntax. This is the default. -mlra Use the new LRA register allocator. By default, the old "reload" allocator is used. picoChip Options These -m options are defined for picoChip implementations: -mae=ae_type Set the instruction set, register set, and instruction scheduling parameters for array element type ae_type. Supported values for ae_type are ANY, MUL, and MAC. -mae=ANY selects a completely generic AE type. Code generated with this option runs on any of the other AE types. The code is not as efficient as it would be if compiled for a specific AE type, and some types of operation (e.g., multiplication) do not work properly on all types of AE. -mae=MUL selects a MUL AE type. This is the most useful AE type for compiled code, and is the default. -mae=MAC selects a DSP-style MAC AE. Code compiled with this option may suffer from poor performance of byte (char) manipulation, since the DSP AE does not provide hardware support for byte load/stores. -msymbol-as-address Enable the compiler to directly use a symbol name as an address in a load/store instruction, without first loading it into a register. Typically, the use of this option generates larger programs, which run faster than when the option isn't used. However, the results vary from program to program, so it is left as a user option, rather than being permanently enabled. -mno-inefficient-warnings Disables warnings about the generation of inefficient code. These warnings can be generated, for example, when compiling code that performs byte-level memory operations on the MAC AE type. The MAC AE has no hardware support for byte-level memory operations, so all byte load/stores must be synthesized from word load/store operations. This is inefficient and a warning is generated to indicate that you should rewrite the code to avoid byte operations, or to target an AE type that has the necessary hardware support. This option disables these warnings. PowerPC Options These are listed under IBM RS/6000 and PowerPC Options, below. RISC-V Options These command-line options are defined for RISC-V targets: -mbranch-cost=n Set the cost of branches to roughly n instructions. -mplt -mno-plt When generating PIC code, do or don't allow the use of PLTs. Ignored for non-PIC. The default is -mplt. -mabi=ABI-string Specify integer and floating-point calling convention. ABI-string contains two parts: the size of integer types and the registers used for floating-point types. For example -march=rv64ifd -mabi=lp64d means that long and pointers are 64-bit (implicitly defining int to be 32-bit), and that floating-point values up to 64 bits wide are passed in F registers. Contrast this with -march=rv64ifd -mabi=lp64f, which still allows the compiler to generate code that uses the F and D extensions but only allows floating-point values up to 32 bits long to be passed in registers; or -march=rv64ifd -mabi=lp64, in which no floating-point arguments will be passed in registers. The default for this argument is system dependent; users who want a specific calling convention should specify one explicitly. The valid calling conventions are: ilp32, ilp32f, ilp32d, lp64, lp64f, and lp64d. Some calling conventions are impossible to implement on some ISAs: for example, -march=rv32if -mabi=ilp32d is invalid because the ABI requires 64-bit values be passed in F registers, but F registers are only 32 bits wide.
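A minimal sketch of the -march/-mabi pairing just described; the riscv64-unknown-elf- prefix and the file name are assumptions for whatever cross toolchain is installed.

    /* fp.c - hypothetical illustration of RISC-V calling-convention choices.
     *
     *   riscv64-unknown-elf-gcc -march=rv64ifd -mabi=lp64d -O2 -c fp.c  # 'x' passed in an F register
     *   riscv64-unknown-elf-gcc -march=rv64ifd -mabi=lp64  -O2 -c fp.c  # 'x' passed in integer registers
     *
     * Both commands may still use the F and D instructions internally; only
     * the argument-passing and return convention differs.
     */
    double scale(double x)
    {
        return x * 2.5;
    }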
There is also the ilp32e ABI that can only be used with the rv32e architecture. This ABI is not well specified at present, and is subject to change. -mfdiv -mno-fdiv Do or don't use hardware floating-point divide and square root instructions. This requires the F or D extensions for floating-point registers. The default is to use them if the specified architecture has these instructions. -mdiv -mno-div Do or don't use hardware instructions for integer division. This requires the M extension. The default is to use them if the specified architecture has these instructions. -march=ISA-string Generate code for the given RISC-V ISA (e.g. rv64im). ISA strings must be lower-case. Examples include rv64i, rv32g, rv32e, and rv32imaf. -mtune=processor-string Optimize the output for the given processor, specified by microarchitecture name. Permissible values for this option are: rocket, sifive-3-series, sifive-5-series, sifive-7-series, and size. When -mtune= is not specified, the default is rocket. The size choice is not intended for use by end-users. This is used when -Os is specified. It overrides the instruction cost info provided by -mtune=, but does not override the pipeline info. This helps reduce code size while still giving good performance. -mpreferred-stack-boundary=num Attempt to keep the stack boundary aligned to a 2 raised to num byte boundary. If -mpreferred-stack-boundary is not specified, the default is 4 (16 bytes or 128 bits). Warning: If you use this switch, then you must build all modules with the same value, including any libraries. This includes the system libraries and startup modules. -msmall-data-limit=n Put global and static data smaller than n bytes into a special section (on some targets). -msave-restore -mno-save-restore Do or don't use smaller but slower prologue and epilogue code that uses library function calls. The default is to use fast inline prologues and epilogues. -mstrict-align -mno-strict-align Do not or do generate unaligned memory accesses. The default is set depending on whether the processor we are optimizing for supports fast unaligned access or not. -mcmodel=medlow Generate code for the medium-low code model. The program and its statically defined symbols must lie within a single 2 GiB address range and must lie between absolute addresses -2 GiB and +2 GiB. Programs can be statically or dynamically linked. This is the default code model. -mcmodel=medany Generate code for the medium-any code model. The program and its statically defined symbols must be within any single 2 GiB address range. Programs can be statically or dynamically linked. -mexplicit-relocs -mno-explicit-relocs Use or do not use assembler relocation operators when dealing with symbolic addresses. The alternative is to use assembler macros instead, which may limit optimization. -mrelax -mno-relax Take advantage of linker relaxations to reduce the number of instructions required to materialize symbol addresses. The default is to take advantage of linker relaxations. -memit-attribute -mno-emit-attribute Emit (do not emit) RISC-V attribute to record extra information into ELF objects. This feature requires at least binutils 2.32. RL78 Options -msim Links in additional target libraries to support operation within a simulator. -mmul=none -mmul=g10 -mmul=g13 -mmul=g14 -mmul=rl78 Specifies the type of hardware multiplication and division support to be used. The simplest is "none", which uses software for both multiplication and division. This is the default.
The "g13" value is for the hardware multiply/divide peripheral found on the RL78/G13 (S2 core) targets. The "g14" value selects the use of the multiplication and division instructions supported by the RL78/G14 (S3 core) parts. The value "rl78" is an alias for "g14" and the value "mg10" is an alias for "none". In addition a C preprocessor macro is defined, based upon the setting of this option. Possible values are: "__RL78_MUL_NONE__", "__RL78_MUL_G13__" or "__RL78_MUL_G14__". -mcpu=g10 -mcpu=g13 -mcpu=g14 -mcpu=rl78 Specifies the RL78 core to target. The default is the G14 core, also known as an S3 core or just RL78. The G13 or S2 core does not have multiply or divide instructions, instead it uses a hardware peripheral for these operations. The G10 or S1 core does not have register banks, so it uses a different calling convention. If this option is set it also selects the type of hardware multiply support to use, unless this is overridden by an explicit -mmul=none option on the command line. Thus specifying -mcpu=g13 enables the use of the G13 hardware multiply peripheral and specifying -mcpu=g10 disables the use of hardware multiplications altogether. Note, although the RL78/G14 core is the default target, specifying -mcpu=g14 or -mcpu=rl78 on the command line does change the behavior of the toolchain since it also enables G14 hardware multiply support. If these options are not specified on the command line then software multiplication routines will be used even though the code targets the RL78 core. This is for backwards compatibility with older toolchains which did not have hardware multiply and divide support. In addition a C preprocessor macro is defined, based upon the setting of this option. Possible values are: "__RL78_G10__", "__RL78_G13__" or "__RL78_G14__". -mg10 -mg13 -mg14 -mrl78 These are aliases for the corresponding -mcpu= option. They are provided for backwards compatibility. -mallregs Allow the compiler to use all of the available registers. By default registers "r24..r31" are reserved for use in interrupt handlers. With this option enabled these registers can be used in ordinary functions as well. -m64bit-doubles -m32bit-doubles Make the "double" data type be 64 bits (-m64bit-doubles) or 32 bits (-m32bit-doubles) in size. The default is -m32bit-doubles. -msave-mduc-in-interrupts -mno-save-mduc-in-interrupts Specifies that interrupt handler functions should preserve the MDUC registers. This is only necessary if normal code might use the MDUC registers, for example because it performs multiplication and division operations. The default is to ignore the MDUC registers as this makes the interrupt handlers faster. The target option -mg13 needs to be passed for this to work as this feature is only available on the G13 target (S2 core). The MDUC registers will only be saved if the interrupt handler performs a multiplication or division operation or it calls another function. IBM RS/6000 and PowerPC Options These -m options are defined for the IBM RS/6000 and PowerPC: -mpowerpc-gpopt -mno-powerpc-gpopt -mpowerpc-gfxopt -mno-powerpc-gfxopt -mpowerpc64 -mno-powerpc64 -mmfcrf -mno-mfcrf -mpopcntb -mno-popcntb -mpopcntd -mno-popcntd -mfprnd -mno-fprnd -mcmpb -mno-cmpb -mmfpgpr -mno-mfpgpr -mhard-dfp -mno-hard-dfp You use these options to specify which instructions are available on the processor you are using. The default value of these options is determined when configuring GCC. Specifying the -mcpu=cpu_type overrides the specification of these options. 
We recommend you use the -mcpu=cpu_type option rather than the options listed above. Specifying -mpowerpc-gpopt allows GCC to use the optional PowerPC architecture instructions in the General Purpose group, including floating-point square root. Specifying -mpowerpc-gfxopt allows GCC to use the optional PowerPC architecture instructions in the Graphics group, including floating-point select. The -mmfcrf option allows GCC to generate the move from condition register field instruction implemented on the POWER4 processor and other processors that support the PowerPC V2.01 architecture. The -mpopcntb option allows GCC to generate the popcount and double-precision FP reciprocal estimate instruction implemented on the POWER5 processor and other processors that support the PowerPC V2.02 architecture. The -mpopcntd option allows GCC to generate the popcount instruction implemented on the POWER7 processor and other processors that support the PowerPC V2.06 architecture. The -mfprnd option allows GCC to generate the FP round to integer instructions implemented on the POWER5+ processor and other processors that support the PowerPC V2.03 architecture. The -mcmpb option allows GCC to generate the compare bytes instruction implemented on the POWER6 processor and other processors that support the PowerPC V2.05 architecture. The -mmfpgpr option allows GCC to generate the FP move to/from general-purpose register instructions implemented on the POWER6X processor and other processors that support the extended PowerPC V2.05 architecture. The -mhard-dfp option allows GCC to generate the decimal floating-point instructions implemented on some POWER processors. The -mpowerpc64 option allows GCC to generate the additional 64-bit instructions that are found in the full PowerPC64 architecture and to treat GPRs as 64-bit, doubleword quantities. GCC defaults to -mno-powerpc64. -mcpu=cpu_type Set architecture type, register usage, and instruction scheduling parameters for machine type cpu_type. Supported values for cpu_type are 401, 403, 405, 405fp, 440, 440fp, 464, 464fp, 476, 476fp, 505, 601, 602, 603, 603e, 604, 604e, 620, 630, 740, 7400, 7450, 750, 801, 821, 823, 860, 970, 8540, a2, e300c2, e300c3, e500mc, e500mc64, e5500, e6500, ec603e, G3, G4, G5, titan, power3, power4, power5, power5+, power6, power6x, power7, power8, power9, powerpc, powerpc64, powerpc64le, rs64, and native. -mcpu=powerpc, -mcpu=powerpc64, and -mcpu=powerpc64le specify pure 32-bit PowerPC (either endian), 64-bit big endian PowerPC and 64-bit little endian PowerPC architecture machine types, with an appropriate, generic processor model assumed for scheduling purposes. Specifying native as cpu type detects and selects the architecture option that corresponds to the host processor of the system performing the compilation. -mcpu=native has no effect if GCC does not recognize the processor. The other options specify a specific processor. Code generated under those options runs best on that processor, and may not run at all on others. 
The -mcpu options automatically enable or disable the following options: -maltivec -mfprnd -mhard-float -mmfcrf -mmultiple -mpopcntb -mpopcntd -mpowerpc64 -mpowerpc-gpopt -mpowerpc-gfxopt -mmulhw -mdlmzb -mmfpgpr -mvsx -mcrypto -mhtm -mpower8-fusion -mpower8-vector -mquad-memory -mquad-memory-atomic -mfloat128 -mfloat128-hardware The particular options set for any particular CPU varies between compiler versions, depending on what setting seems to produce optimal code for that CPU; it doesn't necessarily reflect the actual hardware's capabilities. If you wish to set an individual option to a particular value, you may specify it after the -mcpu option, like -mcpu=970 -mno-altivec. On AIX, the -maltivec and -mpowerpc64 options are not enabled or disabled by the -mcpu option at present because AIX does not have full support for these options. You may still enable or disable them individually if you're sure it'll work in your environment. -mtune=cpu_type Set the instruction scheduling parameters for machine type cpu_type, but do not set the architecture type or register usage, as -mcpu=cpu_type does. The same values for cpu_type are used for -mtune as for -mcpu. If both are specified, the code generated uses the architecture and registers set by -mcpu, but the scheduling parameters set by -mtune. -mcmodel=small Generate PowerPC64 code for the small model: The TOC is limited to 64k. -mcmodel=medium Generate PowerPC64 code for the medium model: The TOC and other static data may be up to a total of 4G in size. This is the default for 64-bit Linux. -mcmodel=large Generate PowerPC64 code for the large model: The TOC may be up to 4G in size. Other data and code is only limited by the 64-bit address space. -maltivec -mno-altivec Generate code that uses (does not use) AltiVec instructions, and also enable the use of built-in functions that allow more direct access to the AltiVec instruction set. You may also need to set -mabi=altivec to adjust the current ABI with AltiVec ABI enhancements. When -maltivec is used, the element order for AltiVec intrinsics such as "vec_splat", "vec_extract", and "vec_insert" match array element order corresponding to the endianness of the target. That is, element zero identifies the leftmost element in a vector register when targeting a big-endian platform, and identifies the rightmost element in a vector register when targeting a little-endian platform. -mvrsave -mno-vrsave Generate VRSAVE instructions when generating AltiVec code. -msecure-plt Generate code that allows ld and ld.so to build executables and shared libraries with non-executable ".plt" and ".got" sections. This is a PowerPC 32-bit SYSV ABI option. -mbss-plt Generate code that uses a BSS ".plt" section that ld.so fills in, and requires ".plt" and ".got" sections that are both writable and executable. This is a PowerPC 32-bit SYSV ABI option. -misel -mno-isel This switch enables or disables the generation of ISEL instructions. -mvsx -mno-vsx Generate code that uses (does not use) vector/scalar (VSX) instructions, and also enable the use of built-in functions that allow more direct access to the VSX instruction set. -mcrypto -mno-crypto Enable the use (disable) of the built-in functions that allow direct access to the cryptographic instructions that were added in version 2.07 of the PowerPC ISA. -mhtm -mno-htm Enable (disable) the use of the built-in functions that allow direct access to the Hardware Transactional Memory (HTM) instructions that were added in version 2.07 of the PowerPC ISA. 
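As a sketch of the built-in function access that -maltivec (and -mvsx) enable, consider the fragment below. The intrinsics come from the standard <altivec.h> header; the compile command and file name are assumptions for a 64-bit PowerPC Linux target.

    /* vadd.c - hypothetical AltiVec/VSX example.
     *
     *   gcc -mcpu=power8 -maltivec -O2 -c vadd.c
     */
    #include <altivec.h>

    vector float vadd(vector float a, vector float b)
    {
        return vec_add(a, b);       /* one vector add */
    }

    float first_element(vector float v)
    {
        return vec_extract(v, 0);   /* element order follows the target endianness, as described above */
    }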
-mpower8-fusion -mno-power8-fusion Generate code that keeps (does not keep) some integer operations adjacent so that the instructions can be fused together on power8 and later processors. -mpower8-vector -mno-power8-vector Generate code that uses (does not use) the vector and scalar instructions that were added in version 2.07 of the PowerPC ISA. Also enable the use of built-in functions that allow more direct access to the vector instructions. -mquad-memory -mno-quad-memory Generate code that uses (does not use) the non-atomic quad word memory instructions. The -mquad-memory option requires use of 64-bit mode. -mquad-memory-atomic -mno-quad-memory-atomic Generate code that uses (does not use) the atomic quad word memory instructions. The -mquad-memory-atomic option requires use of 64-bit mode. -mfloat128 -mno-float128 Enable/disable the __float128 keyword for IEEE 128-bit floating point and use either software emulation for IEEE 128-bit floating point or hardware instructions. The VSX instruction set (-mvsx, -mcpu=power7, -mcpu=power8), or -mcpu=power9 must be enabled to use the IEEE 128-bit floating point support. The IEEE 128-bit floating point support only works on PowerPC Linux systems. The default for -mfloat128 is enabled on PowerPC Linux systems using the VSX instruction set, and disabled on other systems. If you use the ISA 3.0 instruction set (-mpower9-vector or -mcpu=power9) on a 64-bit system, the IEEE 128-bit floating point support will also enable the generation of ISA 3.0 IEEE 128-bit floating point instructions. Otherwise, if you do not specify to generate ISA 3.0 instructions or you are targeting a 32-bit big endian system, IEEE 128-bit floating point will be done with software emulation. -mfloat128-hardware -mno-float128-hardware Enable/disable using ISA 3.0 hardware instructions to support the __float128 data type. The default for -mfloat128-hardware is enabled on PowerPC Linux systems using the ISA 3.0 instruction set, and disabled on other systems. -m32 -m64 Generate code for 32-bit or 64-bit environments of Darwin and SVR4 targets (including GNU/Linux). The 32-bit environment sets int, long and pointer to 32 bits and generates code that runs on any PowerPC variant. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits, and generates code for PowerPC64, as for -mpowerpc64. -mfull-toc -mno-fp-in-toc -mno-sum-in-toc -mminimal-toc Modify generation of the TOC (Table Of Contents), which is created for every executable file. The -mfull-toc option is selected by default. In that case, GCC allocates at least one TOC entry for each unique non-automatic variable reference in your program. GCC also places floating-point constants in the TOC. However, only 16,384 entries are available in the TOC. If you receive a linker error message saying you have overflowed the available TOC space, you can reduce the amount of TOC space used with the -mno-fp-in-toc and -mno-sum-in-toc options. -mno-fp-in-toc prevents GCC from putting floating-point constants in the TOC and -mno-sum-in-toc forces GCC to generate code to calculate the sum of an address and a constant at run time instead of putting that sum into the TOC. You may specify one or both of these options. Each causes GCC to produce very slightly slower and larger code at the expense of conserving TOC space. If you still run out of space in the TOC even when you specify both of these options, specify -mminimal-toc instead. This option causes GCC to make only one TOC entry for every file.
When you specify this option, GCC produces code that is slower and larger but which uses extremely little TOC space. You may wish to use this option only on files that contain less frequently-executed code. -maix64 -maix32 Enable 64-bit AIX ABI and calling convention: 64-bit pointers, 64-bit "long" type, and the infrastructure needed to support them. Specifying -maix64 implies -mpowerpc64, while -maix32 disables the 64-bit ABI and implies -mno-powerpc64. GCC defaults to -maix32. -mxl-compat -mno-xl-compat Produce code that conforms more closely to IBM XL compiler semantics when using AIX-compatible ABI. Pass floating-point arguments to prototyped functions beyond the register save area (RSA) on the stack in addition to argument FPRs. Do not assume that most significant double in 128-bit long double value is properly rounded when comparing values and converting to double. Use XL symbol names for long double support routines. The AIX calling convention was extended but not initially documented to handle an obscure K&R C case of calling a function that takes the address of its arguments with fewer arguments than declared. IBM XL compilers access floating- point arguments that do not fit in the RSA from the stack when a subroutine is compiled without optimization. Because always storing floating-point arguments on the stack is inefficient and rarely needed, this option is not enabled by default and only is necessary when calling subroutines compiled by IBM XL compilers without optimization. -mpe Support IBM RS/6000 SP Parallel Environment (PE). Link an application written to use message passing with special startup code to enable the application to run. The system must have PE installed in the standard location (/usr/lpp/ppe.poe/), or the specs file must be overridden with the -specs= option to specify the appropriate directory location. The Parallel Environment does not support threads, so the -mpe option and the -pthread option are incompatible. -malign-natural -malign-power On AIX, 32-bit Darwin, and 64-bit PowerPC GNU/Linux, the option -malign-natural overrides the ABI-defined alignment of larger types, such as floating-point doubles, on their natural size-based boundary. The option -malign-power instructs GCC to follow the ABI-specified alignment rules. GCC defaults to the standard alignment defined in the ABI. On 64-bit Darwin, natural alignment is the default, and -malign-power is not supported. -msoft-float -mhard-float Generate code that does not use (uses) the floating-point register set. Software floating-point emulation is provided if you use the -msoft-float option, and pass the option to GCC when linking. -mmultiple -mno-multiple Generate code that uses (does not use) the load multiple word instructions and the store multiple word instructions. These instructions are generated by default on POWER systems, and not generated on PowerPC systems. Do not use -mmultiple on little-endian PowerPC systems, since those instructions do not work when the processor is in little-endian mode. The exceptions are PPC740 and PPC750 which permit these instructions in little-endian mode. -mupdate -mno-update Generate code that uses (does not use) the load or store instructions that update the base register to the address of the calculated memory location. These instructions are generated by default. 
If you use -mno-update, there is a small window between the time that the stack pointer is updated and the address of the previous frame is stored, which means code that walks the stack frame across interrupts or signals may get corrupted data. -mavoid-indexed-addresses -mno-avoid-indexed-addresses Generate code that tries to avoid (not avoid) the use of indexed load or store instructions. These instructions can incur a performance penalty on Power6 processors in certain situations, such as when stepping through large arrays that cross a 16M boundary. This option is enabled by default when targeting Power6 and disabled otherwise. -mfused-madd -mno-fused-madd Generate code that uses (does not use) the floating-point multiply and accumulate instructions. These instructions are generated by default if hardware floating point is used. The machine-dependent -mfused-madd option is now mapped to the machine-independent -ffp-contract=fast option, and -mno-fused-madd is mapped to -ffp-contract=off. -mmulhw -mno-mulhw Generate code that uses (does not use) the half-word multiply and multiply-accumulate instructions on the IBM 405, 440, 464 and 476 processors. These instructions are generated by default when targeting those processors. -mdlmzb -mno-dlmzb Generate code that uses (does not use) the string-search dlmzb instruction on the IBM 405, 440, 464 and 476 processors. This instruction is generated by default when targeting those processors. -mno-bit-align -mbit-align On System V.4 and embedded PowerPC systems do not (do) force structures and unions that contain bit-fields to be aligned to the base type of the bit-field. For example, by default a structure containing nothing but 8 "unsigned" bit-fields of length 1 is aligned to a 4-byte boundary and has a size of 4 bytes. By using -mno-bit-align, the structure is aligned to a 1-byte boundary and is 1 byte in size. -mno-strict-align -mstrict-align On System V.4 and embedded PowerPC systems do not (do) assume that unaligned memory references are handled by the system. -mrelocatable -mno-relocatable Generate code that allows (does not allow) a static executable to be relocated to a different address at run time. A simple embedded PowerPC system loader should relocate the entire contents of ".got2" and 4-byte locations listed in the ".fixup" section, a table of 32-bit addresses generated by this option. For this to work, all objects linked together must be compiled with -mrelocatable or -mrelocatable-lib. -mrelocatable code aligns the stack to an 8-byte boundary. -mrelocatable-lib -mno-relocatable-lib Like -mrelocatable, -mrelocatable-lib generates a ".fixup" section to allow static executables to be relocated at run time, but -mrelocatable-lib does not use the smaller stack alignment of -mrelocatable. Objects compiled with -mrelocatable-lib may be linked with objects compiled with any combination of the -mrelocatable options. -mno-toc -mtoc On System V.4 and embedded PowerPC systems do not (do) assume that register 2 contains a pointer to a global area pointing to the addresses used in the program. -mlittle -mlittle-endian On System V.4 and embedded PowerPC systems compile code for the processor in little-endian mode. The -mlittle-endian option is the same as -mlittle. -mbig -mbig-endian On System V.4 and embedded PowerPC systems compile code for the processor in big-endian mode. The -mbig-endian option is the same as -mbig. 
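Returning to the -mno-bit-align example given above: the structure the text describes looks like the sketch below. The file name and the powerpc-eabi- toolchain prefix are assumptions.

    /* flags.c - hypothetical System V.4 / embedded PowerPC bit-field example.
     *
     *   powerpc-eabi-gcc -c flags.c                  # sizeof(struct flags) == 4, 4-byte aligned
     *   powerpc-eabi-gcc -mno-bit-align -c flags.c   # sizeof(struct flags) == 1, 1-byte aligned
     */
    struct flags {
        unsigned a : 1; unsigned b : 1; unsigned c : 1; unsigned d : 1;
        unsigned e : 1; unsigned f : 1; unsigned g : 1; unsigned h : 1;
    };

    unsigned flags_size(void)
    {
        return (unsigned) sizeof(struct flags);
    }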
-mdynamic-no-pic On Darwin and Mac OS X systems, compile code so that it is not relocatable, but that its external references are relocatable. The resulting code is suitable for applications, but not shared libraries. -msingle-pic-base Treat the register used for PIC addressing as read-only, rather than loading it in the prologue for each function. The runtime system is responsible for initializing this register with an appropriate value before execution begins. -mprioritize-restricted-insns=priority This option controls the priority that is assigned to dispatch-slot restricted instructions during the second scheduling pass. The argument priority takes the value 0, 1, or 2 to assign no, highest, or second-highest (respectively) priority to dispatch-slot restricted instructions. -msched-costly-dep=dependence_type This option controls which dependences are considered costly by the target during instruction scheduling. The argument dependence_type takes one of the following values: no No dependence is costly. all All dependences are costly. true_store_to_load A true dependence from store to load is costly. store_to_load Any dependence from store to load is costly. number Any dependence for which the latency is greater than or equal to number is costly. -minsert-sched-nops=scheme This option controls which NOP insertion scheme is used during the second scheduling pass. The argument scheme takes one of the following values: no Don't insert NOPs. pad Pad with NOPs any dispatch group that has vacant issue slots, according to the scheduler's grouping. regroup_exact Insert NOPs to force costly dependent insns into separate groups. Insert exactly as many NOPs as needed to force an insn to a new group, according to the estimated processor grouping. number Insert NOPs to force costly dependent insns into separate groups. Insert number NOPs to force an insn to a new group. -mcall-sysv On System V.4 and embedded PowerPC systems compile code using calling conventions that adhere to the March 1995 draft of the System V Application Binary Interface, PowerPC processor supplement. This is the default unless you configured GCC using powerpc-*-eabiaix. -mcall-sysv-eabi -mcall-eabi Specify both -mcall-sysv and -meabi options. -mcall-sysv-noeabi Specify both -mcall-sysv and -mno-eabi options. -mcall-aixdesc On System V.4 and embedded PowerPC systems compile code for the AIX operating system. -mcall-linux On System V.4 and embedded PowerPC systems compile code for the Linux-based GNU system. -mcall-freebsd On System V.4 and embedded PowerPC systems compile code for the FreeBSD operating system. -mcall-netbsd On System V.4 and embedded PowerPC systems compile code for the NetBSD operating system. -mcall-openbsd On System V.4 and embedded PowerPC systems compile code for the OpenBSD operating system. -mtraceback=traceback_type Select the type of traceback table. Valid values for traceback_type are full, part, and no. -maix-struct-return Return all structures in memory (as specified by the AIX ABI). -msvr4-struct-return Return structures smaller than 8 bytes in registers (as specified by the SVR4 ABI). -mabi=abi-type Extend the current ABI with a particular extension, or remove such extension. Valid values are altivec, no-altivec, ibmlongdouble, ieeelongdouble, elfv1, elfv2. -mabi=ibmlongdouble Change the current ABI to use IBM extended-precision long double. This is not likely to work if your system defaults to using IEEE extended-precision long double. 
If you change the long double type from IEEE extended-precision, the compiler will issue a warning unless you use the -Wno-psabi option. Requires -mlong-double-128 to be enabled. -mabi=ieeelongdouble Change the current ABI to use IEEE extended-precision long double. This is not likely to work if your system defaults to using IBM extended-precision long double. If you change the long double type from IBM extended-precision, the compiler will issue a warning unless you use the -Wno-psabi option. Requires -mlong-double-128 to be enabled. -mabi=elfv1 Change the current ABI to use the ELFv1 ABI. This is the default ABI for big-endian PowerPC 64-bit Linux. Overriding the default ABI requires special system support and is likely to fail in spectacular ways. -mabi=elfv2 Change the current ABI to use the ELFv2 ABI. This is the default ABI for little-endian PowerPC 64-bit Linux. Overriding the default ABI requires special system support and is likely to fail in spectacular ways. -mgnu-attribute -mno-gnu-attribute Emit .gnu_attribute assembly directives to set tag/value pairs in a .gnu.attributes section that specify ABI variations in function parameters or return values. -mprototype -mno-prototype On System V.4 and embedded PowerPC systems assume that all calls to variable argument functions are properly prototyped. Otherwise, the compiler must insert an instruction before every non-prototyped call to set or clear bit 6 of the condition code register ("CR") to indicate whether floating- point values are passed in the floating-point registers in case the function takes variable arguments. With -mprototype, only calls to prototyped variable argument functions set or clear the bit. -msim On embedded PowerPC systems, assume that the startup module is called sim-crt0.o and that the standard C libraries are libsim.a and libc.a. This is the default for powerpc-*-eabisim configurations. -mmvme On embedded PowerPC systems, assume that the startup module is called crt0.o and the standard C libraries are libmvme.a and libc.a. -mads On embedded PowerPC systems, assume that the startup module is called crt0.o and the standard C libraries are libads.a and libc.a. -myellowknife On embedded PowerPC systems, assume that the startup module is called crt0.o and the standard C libraries are libyk.a and libc.a. -mvxworks On System V.4 and embedded PowerPC systems, specify that you are compiling for a VxWorks system. -memb On embedded PowerPC systems, set the "PPC_EMB" bit in the ELF flags header to indicate that eabi extended relocations are used. -meabi -mno-eabi On System V.4 and embedded PowerPC systems do (do not) adhere to the Embedded Applications Binary Interface (EABI), which is a set of modifications to the System V.4 specifications. Selecting -meabi means that the stack is aligned to an 8-byte boundary, a function "__eabi" is called from "main" to set up the EABI environment, and the -msdata option can use both "r2" and "r13" to point to two separate small data areas. Selecting -mno-eabi means that the stack is aligned to a 16-byte boundary, no EABI initialization function is called from "main", and the -msdata option only uses "r13" to point to a single small data area. The -meabi option is on by default if you configured GCC using one of the powerpc*-*-eabi* options. -msdata=eabi On System V.4 and embedded PowerPC systems, put small initialized "const" global and static data in the ".sdata2" section, which is pointed to by register "r2". 
Put small initialized non-"const" global and static data in the ".sdata" section, which is pointed to by register "r13". Put small uninitialized global and static data in the ".sbss" section, which is adjacent to the ".sdata" section. The -msdata=eabi option is incompatible with the -mrelocatable option. The -msdata=eabi option also sets the -memb option. -msdata=sysv On System V.4 and embedded PowerPC systems, put small global and static data in the ".sdata" section, which is pointed to by register "r13". Put small uninitialized global and static data in the ".sbss" section, which is adjacent to the ".sdata" section. The -msdata=sysv option is incompatible with the -mrelocatable option. -msdata=default -msdata On System V.4 and embedded PowerPC systems, if -meabi is used, compile code the same as -msdata=eabi, otherwise compile code the same as -msdata=sysv. -msdata=data On System V.4 and embedded PowerPC systems, put small global data in the ".sdata" section. Put small uninitialized global data in the ".sbss" section. Do not use register "r13" to address small data however. This is the default behavior unless other -msdata options are used. -msdata=none -mno-sdata On embedded PowerPC systems, put all initialized global and static data in the ".data" section, and all uninitialized data in the ".bss" section. -mreadonly-in-sdata Put read-only objects in the ".sdata" section as well. This is the default. -mblock-move-inline-limit=num Inline all block moves (such as calls to "memcpy" or structure copies) less than or equal to num bytes. The minimum value for num is 32 bytes on 32-bit targets and 64 bytes on 64-bit targets. The default value is target- specific. -mblock-compare-inline-limit=num Generate non-looping inline code for all block compares (such as calls to "memcmp" or structure compares) less than or equal to num bytes. If num is 0, all inline expansion (non- loop and loop) of block compare is disabled. The default value is target-specific. -mblock-compare-inline-loop-limit=num Generate an inline expansion using loop code for all block compares that are less than or equal to num bytes, but greater than the limit for non-loop inline block compare expansion. If the block length is not constant, at most num bytes will be compared before "memcmp" is called to compare the remainder of the block. The default value is target- specific. -mstring-compare-inline-limit=num Compare at most num string bytes with inline code. If the difference or end of string is not found at the end of the inline compare a call to "strcmp" or "strncmp" will take care of the rest of the comparison. The default is 64 bytes. -G num On embedded PowerPC systems, put global and static items less than or equal to num bytes into the small data or BSS sections instead of the normal data or BSS section. By default, num is 8. The -G num switch is also passed to the linker. All modules should be compiled with the same -G num value. -mregnames -mno-regnames On System V.4 and embedded PowerPC systems do (do not) emit register names in the assembly language output using symbolic forms. -mlongcall -mno-longcall By default assume that all calls are far away so that a longer and more expensive calling sequence is required. This is required for calls farther than 32 megabytes (33,554,432 bytes) from the current location. A short call is generated if the compiler knows the call cannot be that far away. This setting can be overridden by the "shortcall" function attribute, or by "#pragma longcall(0)". 
Some linkers are capable of detecting out-of-range calls and generating glue code on the fly. On these systems, long calls are unnecessary and generate slower code. As of this writing, the AIX linker can do this, as can the GNU linker for PowerPC/64. It is planned to add this feature to the GNU linker for 32-bit PowerPC systems as well. On PowerPC64 ELFv2 and 32-bit PowerPC systems with newer GNU linkers, GCC can generate long calls using an inline PLT call sequence (see -mpltseq). PowerPC with -mbss-plt and PowerPC64 ELFv1 (big-endian) do not support inline PLT calls. On Darwin/PPC systems, "#pragma longcall" generates "jbsr callee, L42", plus a branch island (glue code). The two target addresses represent the callee and the branch island. The Darwin/PPC linker prefers the first address and generates a "bl callee" if the PPC "bl" instruction reaches the callee directly; otherwise, the linker generates "bl L42" to call the branch island. The branch island is appended to the body of the calling function; it computes the full 32-bit address of the callee and jumps to it. On Mach-O (Darwin) systems, this option directs the compiler to emit the glue for every direct call, and the Darwin linker decides whether to use or discard it. In the future, GCC may ignore all longcall specifications when the linker is known to generate glue. -mpltseq -mno-pltseq Implement (do not implement) -fno-plt and long calls using an inline PLT call sequence that supports lazy linking and long calls to functions in dlopen'd shared libraries. Inline PLT calls are only supported on PowerPC64 ELFv2 and 32-bit PowerPC systems with newer GNU linkers, and are enabled by default if the support is detected when configuring GCC, and, in the case of 32-bit PowerPC, if GCC is configured with --enable-secureplt. -mpltseq code and -mbss-plt 32-bit PowerPC relocatable objects may not be linked together. -mtls-markers -mno-tls-markers Mark (do not mark) calls to "__tls_get_addr" with a relocation specifying the function argument. The relocation allows the linker to reliably associate the function call with argument setup instructions for TLS optimization, which in turn allows GCC to better schedule the sequence. -mrecip -mno-recip This option enables use of the reciprocal estimate and reciprocal square root estimate instructions with additional Newton-Raphson steps to increase precision instead of doing a divide or square root and divide for floating-point arguments. You should use the -ffast-math option when using -mrecip (or at least -funsafe-math-optimizations, -ffinite-math-only, -freciprocal-math and -fno-trapping-math). Note that while the throughput of the sequence is generally higher than the throughput of the non-reciprocal instruction, the precision of the sequence can be decreased by up to 2 ulp (i.e. the inverse of 1.0 equals 0.99999994) for reciprocal square roots. -mrecip=opt This option controls which reciprocal estimate instructions may be used. opt is a comma-separated list of options, which may be preceded by a "!" to invert the option: all Enable all estimate instructions. default Enable the default instructions, equivalent to -mrecip. none Disable all estimate instructions, equivalent to -mno-recip. div Enable the reciprocal approximation instructions for both single and double precision. divf Enable the single-precision reciprocal approximation instructions. divd Enable the double-precision reciprocal approximation instructions.
rsqrt Enable the reciprocal square root approximation instructions for both single and double precision. rsqrtf Enable the single-precision reciprocal square root approximation instructions. rsqrtd Enable the double-precision reciprocal square root approximation instructions. So, for example, -mrecip=all,!rsqrtd enables all of the reciprocal estimate instructions, except for the "FRSQRTE", "XSRSQRTEDP", and "XVRSQRTEDP" instructions which handle the double-precision reciprocal square root calculations. -mrecip-precision -mno-recip-precision Assume (do not assume) that the reciprocal estimate instructions provide higher-precision estimates than is mandated by the PowerPC ABI. Selecting -mcpu=power6, -mcpu=power7 or -mcpu=power8 automatically selects -mrecip-precision. The double-precision square root estimate instructions are not generated by default on low-precision machines, since they do not provide an estimate that converges after three steps. -mveclibabi=type Specifies the ABI type to use for vectorizing intrinsics using an external library. The only type supported at present is mass, which specifies to use IBM's Mathematical Acceleration Subsystem (MASS) libraries for vectorizing intrinsics using external libraries. GCC currently emits calls to "acosd2", "acosf4", "acoshd2", "acoshf4", "asind2", "asinf4", "asinhd2", "asinhf4", "atan2d2", "atan2f4", "atand2", "atanf4", "atanhd2", "atanhf4", "cbrtd2", "cbrtf4", "cosd2", "cosf4", "coshd2", "coshf4", "erfcd2", "erfcf4", "erfd2", "erff4", "exp2d2", "exp2f4", "expd2", "expf4", "expm1d2", "expm1f4", "hypotd2", "hypotf4", "lgammad2", "lgammaf4", "log10d2", "log10f4", "log1pd2", "log1pf4", "log2d2", "log2f4", "logd2", "logf4", "powd2", "powf4", "sind2", "sinf4", "sinhd2", "sinhf4", "sqrtd2", "sqrtf4", "tand2", "tanf4", "tanhd2", and "tanhf4" when generating code for power7. Both -ftree-vectorize and -funsafe-math-optimizations must also be enabled. The MASS libraries must be specified at link time. -mfriz -mno-friz Generate (do not generate) the "friz" instruction when the -funsafe-math-optimizations option is used to optimize rounding of floating-point values to 64-bit integer and back to floating point. The "friz" instruction does not return the same value if the floating-point number is too large to fit in an integer. -mpointers-to-nested-functions -mno-pointers-to-nested-functions Generate (do not generate) code to load up the static chain register ("r11") when calling through a pointer on AIX and 64-bit Linux systems where a function pointer points to a 3-word descriptor giving the function address, TOC value to be loaded in register "r2", and static chain value to be loaded in register "r11". The -mpointers-to-nested-functions is on by default. You cannot call through pointers to nested functions or pointers to functions compiled in other languages that use the static chain if you use -mno-pointers-to-nested-functions. -msave-toc-indirect -mno-save-toc-indirect Generate (do not generate) code to save the TOC value in the reserved stack location in the function prologue if the function calls through a pointer on AIX and 64-bit Linux systems. If the TOC value is not saved in the prologue, it is saved just before the call through the pointer. The -mno-save-toc-indirect option is the default. -mcompat-align-parm -mno-compat-align-parm Generate (do not generate) code to pass structure parameters with a maximum alignment of 64 bits, for compatibility with older versions of GCC. 
Older versions of GCC (prior to 4.9.0) incorrectly did not align a structure parameter on a 128-bit boundary when that structure contained a member requiring 128-bit alignment. This is corrected in more recent versions of GCC. This option may be used to generate code that is compatible with functions compiled with older versions of GCC. The -mno-compat-align-parm option is the default. -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset -mstack-protector-guard-symbol=symbol Generate stack protection code using canary at guard. Supported locations are global for global canary or tls for per-thread canary in the TLS block (the default with GNU libc version 2.4 or later). With the latter choice the options -mstack-protector-guard-reg=reg and -mstack-protector-guard-offset=offset furthermore specify which register to use as base register for reading the canary, and from what offset from that base register. The default for those is as specified in the relevant ABI. -mstack-protector-guard-symbol=symbol overrides the offset with a symbol reference to a canary in the TLS block. RX Options These command-line options are defined for RX targets: -m64bit-doubles -m32bit-doubles Make the "double" data type be 64 bits (-m64bit-doubles) or 32 bits (-m32bit-doubles) in size. The default is -m32bit-doubles. Note RX floating-point hardware only works on 32-bit values, which is why the default is -m32bit-doubles. -fpu -nofpu Enables (-fpu) or disables (-nofpu) the use of RX floating- point hardware. The default is enabled for the RX600 series and disabled for the RX200 series. Floating-point instructions are only generated for 32-bit floating-point values, however, so the FPU hardware is not used for doubles if the -m64bit-doubles option is used. Note If the -fpu option is enabled then -funsafe-math-optimizations is also enabled automatically. This is because the RX FPU instructions are themselves unsafe. -mcpu=name Selects the type of RX CPU to be targeted. Currently three types are supported, the generic RX600 and RX200 series hardware and the specific RX610 CPU. The default is RX600. The only difference between RX600 and RX610 is that the RX610 does not support the "MVTIPL" instruction. The RX200 series does not have a hardware floating-point unit and so -nofpu is enabled by default when this type is selected. -mbig-endian-data -mlittle-endian-data Store data (but not code) in the big-endian format. The default is -mlittle-endian-data, i.e. to store data in the little-endian format. -msmall-data-limit=N Specifies the maximum size in bytes of global and static variables which can be placed into the small data area. Using the small data area can lead to smaller and faster code, but the size of area is limited and it is up to the programmer to ensure that the area does not overflow. Also when the small data area is used one of the RX's registers (usually "r13") is reserved for use pointing to this area, so it is no longer available for use by the compiler. This could result in slower and/or larger code if variables are pushed onto the stack instead of being held in this register. Note, common variables (variables that have not been initialized) and constants are not placed into the small data area as they are assigned to other sections in the output executable. The default value is zero, which disables this feature. 
Note, this feature is not enabled by default with higher optimization levels (-O2 etc) because of the potentially detrimental effects of reserving a register. It is up to the programmer to experiment and discover whether this feature is of benefit to their program. See the description of the -mpid option for a description of how the actual register to hold the small data area pointer is chosen. -msim -mno-sim Use the simulator runtime. The default is to use the libgloss board-specific runtime. -mas100-syntax -mno-as100-syntax When generating assembler output use a syntax that is compatible with Renesas's AS100 assembler. This syntax can also be handled by the GAS assembler, but it has some restrictions so it is not generated by default. -mmax-constant-size=N Specifies the maximum size, in bytes, of a constant that can be used as an operand in a RX instruction. Although the RX instruction set does allow constants of up to 4 bytes in length to be used in instructions, a longer value equates to a longer instruction. Thus in some circumstances it can be beneficial to restrict the size of constants that are used in instructions. Constants that are too big are instead placed into a constant pool and referenced via register indirection. The value N can be between 0 and 4. A value of 0 (the default) or 4 means that constants of any size are allowed. -mrelax Enable linker relaxation. Linker relaxation is a process whereby the linker attempts to reduce the size of a program by finding shorter versions of various instructions. Disabled by default. -mint-register=N Specify the number of registers to reserve for fast interrupt handler functions. The value N can be between 0 and 4. A value of 1 means that register "r13" is reserved for the exclusive use of fast interrupt handlers. A value of 2 reserves "r13" and "r12". A value of 3 reserves "r13", "r12" and "r11", and a value of 4 reserves "r13" through "r10". A value of 0, the default, does not reserve any registers. -msave-acc-in-interrupts Specifies that interrupt handler functions should preserve the accumulator register. This is only necessary if normal code might use the accumulator register, for example because it performs 64-bit multiplications. The default is to ignore the accumulator as this makes the interrupt handlers faster. -mpid -mno-pid Enables the generation of position independent data. When enabled any access to constant data is done via an offset from a base address held in a register. This allows the location of constant data to be determined at run time without requiring the executable to be relocated, which is a benefit to embedded applications with tight memory constraints. Data that can be modified is not affected by this option. Note, using this feature reserves a register, usually "r13", for the constant data base address. This can result in slower and/or larger code, especially in complicated functions. The actual register chosen to hold the constant data base address depends upon whether the -msmall-data-limit and/or the -mint-register command-line options are enabled. Starting with register "r13" and proceeding downwards, registers are allocated first to satisfy the requirements of -mint-register, then -mpid and finally -msmall-data-limit. Thus it is possible for the small data area register to be "r8" if both -mint-register=4 and -mpid are specified on the command line. By default this feature is not enabled. The default can be restored via the -mno-pid command-line option. 
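The interaction of -msmall-data-limit, -mint-register and -mpid can be easier to see with a concrete build. The following is only an illustrative sketch: the cross-compiler name rx-elf-gcc, the file name and the 64-byte threshold are assumptions, not requirements of the RX port.

    /* rx_sda_demo.c - illustrative sketch only. */
    int counter = 1;               /* small, initialized: eligible for the
                                      small data area */
    char log_buf[256] = { 1 };     /* larger than the chosen limit: placed
                                      in a normal data section */

    int bump(void)
    {
        return ++counter;
    }

    /* Possible invocation (hypothetical):
         rx-elf-gcc -O2 -msmall-data-limit=64 -mint-register=2 -c rx_sda_demo.c
       With -mint-register=2 reserving "r13" and "r12", the small data
       area pointer would be allocated to "r11", following the allocation
       order described above. */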
-mno-warn-multiple-fast-interrupts -mwarn-multiple-fast-interrupts Prevents GCC from issuing a warning message if it finds more than one fast interrupt handler when it is compiling a file. The default is to issue a warning for each extra fast interrupt handler found, as the RX only supports one such interrupt. -mallow-string-insns -mno-allow-string-insns Enables or disables the use of the string manipulation instructions "SMOVF", "SCMPU", "SMOVB", "SMOVU", "SUNTIL" "SWHILE" and also the "RMPA" instruction. These instructions may prefetch data, which is not safe to do if accessing an I/O register. (See section 12.2.7 of the RX62N Group User's Manual for more information). The default is to allow these instructions, but it is not possible for GCC to reliably detect all circumstances where a string instruction might be used to access an I/O register, so their use cannot be disabled automatically. Instead it is reliant upon the programmer to use the -mno-allow-string-insns option if their program accesses I/O space. When the instructions are enabled GCC defines the C preprocessor symbol "__RX_ALLOW_STRING_INSNS__", otherwise it defines the symbol "__RX_DISALLOW_STRING_INSNS__". -mjsr -mno-jsr Use only (or not only) "JSR" instructions to access functions. This option can be used when code size exceeds the range of "BSR" instructions. Note that -mno-jsr does not mean to not use "JSR" but instead means that any type of branch may be used. Note: The generic GCC command-line option -ffixed-reg has special significance to the RX port when used with the "interrupt" function attribute. This attribute indicates a function intended to process fast interrupts. GCC ensures that it only uses the registers "r10", "r11", "r12" and/or "r13" and only provided that the normal use of the corresponding registers have been restricted via the -ffixed-reg or -mint-register command-line options. S/390 and zSeries Options These are the -m options defined for the S/390 and zSeries architecture. -mhard-float -msoft-float Use (do not use) the hardware floating-point instructions and registers for floating-point operations. When -msoft-float is specified, functions in libgcc.a are used to perform floating-point operations. When -mhard-float is specified, the compiler generates IEEE floating-point instructions. This is the default. -mhard-dfp -mno-hard-dfp Use (do not use) the hardware decimal-floating-point instructions for decimal-floating-point operations. When -mno-hard-dfp is specified, functions in libgcc.a are used to perform decimal-floating-point operations. When -mhard-dfp is specified, the compiler generates decimal-floating-point hardware instructions. This is the default for -march=z9-ec or higher. -mlong-double-64 -mlong-double-128 These switches control the size of "long double" type. A size of 64 bits makes the "long double" type equivalent to the "double" type. This is the default. -mbackchain -mno-backchain Store (do not store) the address of the caller's frame as backchain pointer into the callee's stack frame. A backchain may be needed to allow debugging using tools that do not understand DWARF call frame information. When -mno-packed-stack is in effect, the backchain pointer is stored at the bottom of the stack frame; when -mpacked-stack is in effect, the backchain is placed into the topmost word of the 96/160 byte register save area. 
In general, code compiled with -mbackchain is call-compatible with code compiled with -mno-backchain; however, use of the backchain for debugging purposes usually requires that the whole binary is built with -mbackchain. Note that the combination of -mbackchain, -mpacked-stack and -mhard-float is not supported. In order to build a Linux kernel use -msoft-float. The default is to not maintain the backchain. -mpacked-stack -mno-packed-stack Use (do not use) the packed stack layout. When -mno-packed-stack is specified, the compiler uses all fields of the 96/160 byte register save area only for their default purpose; unused fields still take up stack space. When -mpacked-stack is specified, register save slots are densely packed at the top of the register save area; unused space is reused for other purposes, allowing for more efficient use of the available stack space. However, when -mbackchain is also in effect, the topmost word of the save area is always used to store the backchain, and the return address register is always saved two words below the backchain. As long as the stack frame backchain is not used, code generated with -mpacked-stack is call-compatible with code generated with -mno-packed-stack. Note that some non-FSF releases of GCC 2.95 for S/390 or zSeries generated code that uses the stack frame backchain at run time, not just for debugging purposes. Such code is not call-compatible with code compiled with -mpacked-stack. Also, note that the combination of -mbackchain, -mpacked-stack and -mhard-float is not supported. In order to build a Linux kernel use -msoft-float. The default is to not use the packed stack layout. -msmall-exec -mno-small-exec Generate (or do not generate) code using the "bras" instruction to do subroutine calls. This only works reliably if the total executable size does not exceed 64k. The default is to use the "basr" instruction instead, which does not have this limitation. -m64 -m31 When -m31 is specified, generate code compliant to the GNU/Linux for S/390 ABI. When -m64 is specified, generate code compliant to the GNU/Linux for zSeries ABI. This allows GCC in particular to generate 64-bit instructions. For the s390 targets, the default is -m31, while the s390x targets default to -m64. -mzarch -mesa When -mzarch is specified, generate code using the instructions available on z/Architecture. When -mesa is specified, generate code using the instructions available on ESA/390. Note that -mesa is not possible with -m64. When generating code compliant to the GNU/Linux for S/390 ABI, the default is -mesa. When generating code compliant to the GNU/Linux for zSeries ABI, the default is -mzarch. -mhtm -mno-htm The -mhtm option enables a set of builtins making use of instructions available with the transactional execution facility introduced with the IBM zEnterprise EC12 machine generation (see S/390 System z Built-in Functions). -mhtm is enabled by default when using -march=zEC12. -mvx -mno-vx When -mvx is specified, generate code using the instructions available with the vector extension facility introduced with the IBM z13 machine generation. This option changes the ABI for some vector type values with regard to alignment and calling conventions. In case vector type values are being used in an ABI-relevant context, a GAS .gnu_attribute command will be added to mark the resulting binary with the ABI used. -mvx is enabled by default when using -march=z13.
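As a concrete illustration of the restriction noted above (the combination of -mbackchain, -mpacked-stack and -mhard-float is not supported), the following sketch shows one way a debugging-friendly S/390 object might be compiled; the file name and flag selection are assumptions for the example, not a recommended configuration.

    /* sum390.c - minimal sketch; the code itself is ordinary, the point
       is the frame layout selected by the options below. */
    long sum(const long *v, unsigned long n)
    {
        long s = 0;
        while (n--)
            s += *v++;
        return s;
    }

    /* Possible invocation (hypothetical): keep a backchain in a packed
       stack frame and use -msoft-float, since hard float cannot be
       combined with these two options:
         gcc -O2 -m64 -mbackchain -mpacked-stack -msoft-float -c sum390.c */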
-mzvector -mno-zvector The -mzvector option enables vector language extensions and builtins using instructions available with the vector extension facility introduced with the IBM z13 machine generation. This option adds support for vector to be used as a keyword to define vector type variables and arguments. vector is only available when GNU extensions are enabled. It will not be expanded when requesting strict standard compliance e.g. with -std=c99. In addition to the GCC low- level builtins -mzvector enables a set of builtins added for compatibility with AltiVec-style implementations like Power and Cell. In order to make use of these builtins the header file vecintrin.h needs to be included. -mzvector is disabled by default. -mmvcle -mno-mvcle Generate (or do not generate) code using the "mvcle" instruction to perform block moves. When -mno-mvcle is specified, use a "mvc" loop instead. This is the default unless optimizing for size. -mdebug -mno-debug Print (or do not print) additional debug information when compiling. The default is to not print debug information. -march=cpu-type Generate code that runs on cpu-type, which is the name of a system representing a certain processor type. Possible values for cpu-type are z900/arch5, z990/arch6, z9-109, z9-ec/arch7, z10/arch8, z196/arch9, zEC12, z13/arch11, z14/arch12, and native. The default is -march=z900. Specifying native as cpu type can be used to select the best architecture option for the host processor. -march=native has no effect if GCC does not recognize the processor. -mtune=cpu-type Tune to cpu-type everything applicable about the generated code, except for the ABI and the set of available instructions. The list of cpu-type values is the same as for -march. The default is the value used for -march. -mtpf-trace -mno-tpf-trace Generate code that adds (does not add) in TPF OS specific branches to trace routines in the operating system. This option is off by default, even when compiling for the TPF OS. -mfused-madd -mno-fused-madd Generate code that uses (does not use) the floating-point multiply and accumulate instructions. These instructions are generated by default if hardware floating point is used. -mwarn-framesize=framesize Emit a warning if the current function exceeds the given frame size. Because this is a compile-time check it doesn't need to be a real problem when the program runs. It is intended to identify functions that most probably cause a stack overflow. It is useful to be used in an environment with limited stack size e.g. the linux kernel. -mwarn-dynamicstack Emit a warning if the function calls "alloca" or uses dynamically-sized arrays. This is generally a bad idea with a limited stack size. -mstack-guard=stack-guard -mstack-size=stack-size If these options are provided the S/390 back end emits additional instructions in the function prologue that trigger a trap if the stack size is stack-guard bytes above the stack-size (remember that the stack on S/390 grows downward). If the stack-guard option is omitted the smallest power of 2 larger than the frame size of the compiled function is chosen. These options are intended to be used to help debugging stack overflow problems. The additionally emitted code causes only little overhead and hence can also be used in production-like systems without greater performance degradation. The given values have to be exact powers of 2 and stack-size has to be greater than stack-guard without exceeding 64k. 
In order to be efficient the extra code makes the assumption that the stack starts at an address aligned to the value given by stack-size. The stack-guard option can only be used in conjunction with stack-size. -mhotpatch=pre-halfwords,post-halfwords If the hotpatch option is enabled, a "hot-patching" function prologue is generated for all functions in the compilation unit. The function label is prepended with the given number of two-byte NOP instructions (pre-halfwords, maximum 1000000). After the label, 2 * post-halfwords bytes are appended, using the largest NOP-like instructions the architecture allows (maximum 1000000). If both arguments are zero, hotpatching is disabled. This option can be overridden for individual functions with the "hotpatch" attribute. Score Options These options are defined for Score implementations: -meb Compile code for big-endian mode. This is the default. -mel Compile code for little-endian mode. -mnhwloop Disable generation of "bcnz" instructions. -muls Enable generation of unaligned load and store instructions. -mmac Enable the use of multiply-accumulate instructions. Disabled by default. -mscore5 Specify the SCORE5 as the target architecture. -mscore5u Specify the SCORE5U as the target architecture. -mscore7 Specify the SCORE7 as the target architecture. This is the default. -mscore7d Specify the SCORE7D as the target architecture. SH Options These -m options are defined for the SH implementations: -m1 Generate code for the SH1. -m2 Generate code for the SH2. -m2e Generate code for the SH2e. -m2a-nofpu Generate code for the SH2a without FPU, or for a SH2a-FPU in such a way that the floating-point unit is not used. -m2a-single-only Generate code for the SH2a-FPU, in such a way that no double-precision floating-point operations are used. -m2a-single Generate code for the SH2a-FPU assuming the floating-point unit is in single-precision mode by default. -m2a Generate code for the SH2a-FPU assuming the floating-point unit is in double-precision mode by default. -m3 Generate code for the SH3. -m3e Generate code for the SH3e. -m4-nofpu Generate code for the SH4 without a floating-point unit. -m4-single-only Generate code for the SH4 with a floating-point unit that only supports single-precision arithmetic. -m4-single Generate code for the SH4 assuming the floating-point unit is in single-precision mode by default. -m4 Generate code for the SH4. -m4-100 Generate code for SH4-100. -m4-100-nofpu Generate code for SH4-100 in such a way that the floating-point unit is not used. -m4-100-single Generate code for SH4-100 assuming the floating-point unit is in single-precision mode by default. -m4-100-single-only Generate code for SH4-100 in such a way that no double-precision floating-point operations are used. -m4-200 Generate code for SH4-200. -m4-200-nofpu Generate code for SH4-200 in such a way that the floating-point unit is not used. -m4-200-single Generate code for SH4-200 assuming the floating-point unit is in single-precision mode by default. -m4-200-single-only Generate code for SH4-200 in such a way that no double-precision floating-point operations are used. -m4-300 Generate code for SH4-300. -m4-300-nofpu Generate code for SH4-300 in such a way that the floating-point unit is not used. -m4-300-single Generate code for SH4-300 assuming the floating-point unit is in single-precision mode by default. -m4-300-single-only Generate code for SH4-300 in such a way that no double-precision floating-point operations are used.
-m4-340 Generate code for SH4-340 (no MMU, no FPU). -m4-500 Generate code for SH4-500 (no FPU). Passes -isa=sh4-nofpu to the assembler. -m4a-nofpu Generate code for the SH4al-dsp, or for a SH4a in such a way that the floating-point unit is not used. -m4a-single-only Generate code for the SH4a, in such a way that no double- precision floating-point operations are used. -m4a-single Generate code for the SH4a assuming the floating-point unit is in single-precision mode by default. -m4a Generate code for the SH4a. -m4al Same as -m4a-nofpu, except that it implicitly passes -dsp to the assembler. GCC doesn't generate any DSP instructions at the moment. -mb Compile code for the processor in big-endian mode. -ml Compile code for the processor in little-endian mode. -mdalign Align doubles at 64-bit boundaries. Note that this changes the calling conventions, and thus some functions from the standard C library do not work unless you recompile it first with -mdalign. -mrelax Shorten some address references at link time, when possible; uses the linker option -relax. -mbigtable Use 32-bit offsets in "switch" tables. The default is to use 16-bit offsets. -mbitops Enable the use of bit manipulation instructions on SH2A. -mfmovd Enable the use of the instruction "fmovd". Check -mdalign for alignment constraints. -mrenesas Comply with the calling conventions defined by Renesas. -mno-renesas Comply with the calling conventions defined for GCC before the Renesas conventions were available. This option is the default for all targets of the SH toolchain. -mnomacsave Mark the "MAC" register as call-clobbered, even if -mrenesas is given. -mieee -mno-ieee Control the IEEE compliance of floating-point comparisons, which affects the handling of cases where the result of a comparison is unordered. By default -mieee is implicitly enabled. If -ffinite-math-only is enabled -mno-ieee is implicitly set, which results in faster floating-point greater-equal and less-equal comparisons. The implicit settings can be overridden by specifying either -mieee or -mno-ieee. -minline-ic_invalidate Inline code to invalidate instruction cache entries after setting up nested function trampolines. This option has no effect if -musermode is in effect and the selected code generation option (e.g. -m4) does not allow the use of the "icbi" instruction. If the selected code generation option does not allow the use of the "icbi" instruction, and -musermode is not in effect, the inlined code manipulates the instruction cache address array directly with an associative write. This not only requires privileged mode at run time, but it also fails if the cache line had been mapped via the TLB and has become unmapped. -misize Dump instruction size and location in the assembly code. -mpadstruct This option is deprecated. It pads structures to multiple of 4 bytes, which is incompatible with the SH ABI. -matomic-model=model Sets the model of atomic operations and additional parameters as a comma separated list. For details on the atomic built- in functions see __atomic Builtins. The following models and parameters are supported: none Disable compiler generated atomic sequences and emit library calls for atomic operations. This is the default if the target is not "sh*-*-linux*". soft-gusa Generate GNU/Linux compatible gUSA software atomic sequences for the atomic built-in functions. 
The generated atomic sequences require additional support from the interrupt/exception handling code of the system and are only suitable for SH3* and SH4* single-core systems. This option is enabled by default when the target is "sh*-*-linux*" and SH3* or SH4*. When the target is SH4A, this option also partially utilizes the hardware atomic instructions "movli.l" and "movco.l" to create more efficient code, unless strict is specified. soft-tcb Generate software atomic sequences that use a variable in the thread control block. This is a variation of the gUSA sequences which can also be used on SH1* and SH2* targets. The generated atomic sequences require additional support from the interrupt/exception handling code of the system and are only suitable for single-core systems. When using this model, the gbr-offset= parameter has to be specified as well. soft-imask Generate software atomic sequences that temporarily disable interrupts by setting "SR.IMASK = 1111". This model works only when the program runs in privileged mode and is only suitable for single-core systems. Additional support from the interrupt/exception handling code of the system is not required. This model is enabled by default when the target is "sh*-*-linux*" and SH1* or SH2*. hard-llcs Generate hardware atomic sequences using the "movli.l" and "movco.l" instructions only. This is only available on SH4A and is suitable for multi-core systems. Since the hardware instructions support only 32 bit atomic variables access to 8 or 16 bit variables is emulated with 32 bit accesses. Code compiled with this option is also compatible with other software atomic model interrupt/exception handling systems if executed on an SH4A system. Additional support from the interrupt/exception handling code of the system is not required for this model. gbr-offset= This parameter specifies the offset in bytes of the variable in the thread control block structure that should be used by the generated atomic sequences when the soft-tcb model has been selected. For other models this parameter is ignored. The specified value must be an integer multiple of four and in the range 0-1020. strict This parameter prevents mixed usage of multiple atomic models, even if they are compatible, and makes the compiler generate atomic sequences of the specified model only. -mtas Generate the "tas.b" opcode for "__atomic_test_and_set". Notice that depending on the particular hardware and software configuration this can degrade overall performance due to the operand cache line flushes that are implied by the "tas.b" instruction. On multi-core SH4A processors the "tas.b" instruction must be used with caution since it can result in data corruption for certain cache configurations. -mprefergot When generating position-independent code, emit function calls using the Global Offset Table instead of the Procedure Linkage Table. -musermode -mno-usermode Don't allow (allow) the compiler generating privileged mode code. Specifying -musermode also implies -mno-inline-ic_invalidate if the inlined code would not work in user mode. -musermode is the default when the target is "sh*-*-linux*". If the target is SH1* or SH2* -musermode has no effect, since there is no user mode. -multcost=number Set the cost to assume for a multiply insn. -mdiv=strategy Set the division strategy to be used for integer division operations. strategy can be one of: call-div1 Calls a library function that uses the single-step division instruction "div1" to perform the operation. 
Division by zero calculates an unspecified result and does not trap. This is the default except for SH4, SH2A and SHcompact. call-fp Calls a library function that performs the operation in double precision floating point. Division by zero causes a floating-point exception. This is the default for SHcompact with FPU. Specifying this for targets that do not have a double precision FPU defaults to "call-div1". call-table Calls a library function that uses a lookup table for small divisors and the "div1" instruction with case distinction for larger divisors. Division by zero calculates an unspecified result and does not trap. This is the default for SH4. Specifying this for targets that do not have dynamic shift instructions defaults to "call-div1". When a division strategy has not been specified the default strategy is selected based on the current target. For SH2A the default strategy is to use the "divs" and "divu" instructions instead of library function calls. -maccumulate-outgoing-args Reserve space once for outgoing arguments in the function prologue rather than around each call. Generally beneficial for performance and size. Also needed for unwinding to avoid changing the stack frame around conditional code. -mdivsi3_libfunc=name Set the name of the library function used for 32-bit signed division to name. This only affects the name used in the call division strategies, and the compiler still expects the same sets of input/output/clobbered registers as if this option were not present. -mfixed-range=register-range Generate code treating the given register range as fixed registers. A fixed register is one that the register allocator cannot use. This is useful when compiling kernel code. A register range is specified as two registers separated by a dash. Multiple register ranges can be specified separated by a comma. -mbranch-cost=num Assume num to be the cost for a branch instruction. Higher numbers make the compiler try to generate more branch-free code if possible. If not specified the value is selected depending on the processor type that is being compiled for. -mzdcbranch -mno-zdcbranch Assume (do not assume) that zero displacement conditional branch instructions "bt" and "bf" are fast. If -mzdcbranch is specified, the compiler prefers zero displacement branch code sequences. This is enabled by default when generating code for SH4 and SH4A. It can be explicitly disabled by specifying -mno-zdcbranch. -mcbranch-force-delay-slot Force the usage of delay slots for conditional branches, which stuffs the delay slot with a "nop" if a suitable instruction cannot be found. By default this option is disabled. It can be enabled to work around hardware bugs as found in the original SH7055. -mfused-madd -mno-fused-madd Generate code that uses (does not use) the floating-point multiply and accumulate instructions. These instructions are generated by default if hardware floating point is used. The machine-dependent -mfused-madd option is now mapped to the machine-independent -ffp-contract=fast option, and -mno-fused-madd is mapped to -ffp-contract=off. -mfsca -mno-fsca Allow or disallow the compiler to emit the "fsca" instruction for sine and cosine approximations. The option -mfsca must be used in combination with -funsafe-math-optimizations. It is enabled by default when generating code for SH4A. Using -mno-fsca disables sine and cosine approximations even if -funsafe-math-optimizations is in effect. 
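Returning to the -matomic-model parameters described above, here is a hedged sketch of the soft-tcb model; the gbr-offset value of 16 and the cross-compiler name sh4-linux-gnu-gcc are invented for illustration and would have to match the thread control block layout actually used by the target system.

    /* flag.c - illustrative only. */
    #include <stdatomic.h>

    atomic_int ready;

    void publish(void)
    {
        atomic_store(&ready, 1);    /* expanded to a soft-tcb software
                                       atomic sequence */
    }

    int consume(void)
    {
        return atomic_load(&ready);
    }

    /* Possible invocation (hypothetical offset; it must be a multiple of
       4 in the range 0-1020, as required above):
         sh4-linux-gnu-gcc -O2 -matomic-model=soft-tcb,gbr-offset=16 -c flag.c */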
-mfsrra -mno-fsrra Allow or disallow the compiler to emit the "fsrra" instruction for reciprocal square root approximations. The option -mfsrra must be used in combination with -funsafe-math-optimizations and -ffinite-math-only. It is enabled by default when generating code for SH4A. Using -mno-fsrra disables reciprocal square root approximations even if -funsafe-math-optimizations and -ffinite-math-only are in effect. -mpretend-cmove Prefer zero-displacement conditional branches for conditional move instruction patterns. This can result in faster code on the SH4 processor. -mfdpic Generate code using the FDPIC ABI. Solaris 2 Options These -m options are supported on Solaris 2: -mclear-hwcap -mclear-hwcap tells the compiler to remove the hardware capabilities generated by the Solaris assembler. This is only necessary when object files use ISA extensions not supported by the current machine, but check at runtime whether or not to use them. -mimpure-text -mimpure-text, used in addition to -shared, tells the compiler to not pass -z text to the linker when linking a shared object. Using this option, you can link position- dependent code into a shared object. -mimpure-text suppresses the "relocations remain against allocatable but non-writable sections" linker error message. However, the necessary relocations trigger copy-on-write, and the shared object is not actually shared across processes. Instead of using -mimpure-text, you should compile all source code with -fpic or -fPIC. These switches are supported in addition to the above on Solaris 2: -pthreads This is a synonym for -pthread. SPARC Options These -m options are supported on the SPARC: -mno-app-regs -mapp-regs Specify -mapp-regs to generate output using the global registers 2 through 4, which the SPARC SVR4 ABI reserves for applications. Like the global register 1, each global register 2 through 4 is then treated as an allocable register that is clobbered by function calls. This is the default. To be fully SVR4 ABI-compliant at the cost of some performance loss, specify -mno-app-regs. You should compile libraries and system software with this option. -mflat -mno-flat With -mflat, the compiler does not generate save/restore instructions and uses a "flat" or single register window model. This model is compatible with the regular register window model. The local registers and the input registers (0--5) are still treated as "call-saved" registers and are saved on the stack as needed. With -mno-flat (the default), the compiler generates save/restore instructions (except for leaf functions). This is the normal operating mode. -mfpu -mhard-float Generate output containing floating-point instructions. This is the default. -mno-fpu -msoft-float Generate output containing library calls for floating point. Warning: the requisite libraries are not available for all SPARC targets. Normally the facilities of the machine's usual C compiler are used, but this cannot be done directly in cross-compilation. You must make your own arrangements to provide suitable library functions for cross-compilation. The embedded targets sparc-*-aout and sparclite-*-* do provide software floating-point support. -msoft-float changes the calling convention in the output file; therefore, it is only useful if you compile all of a program with this option. In particular, you need to compile libgcc.a, the library that comes with GCC, with -msoft-float in order for this to work. 
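Because -msoft-float changes the calling convention, the whole program, and libgcc itself, must be built the same way; the sketch below illustrates this requirement. The file names and the availability of a soft-float multilib are assumptions for the example.

    /* dot.c - uses floating point, so with -msoft-float it calls library
       support routines instead of FPU instructions. */
    double dot(const double *a, const double *b, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    /* Possible invocation (hypothetical): every translation unit is
       compiled with the same option, and the link uses a libgcc built
       with -msoft-float:
         gcc -O2 -msoft-float -c dot.c main.c
         gcc -msoft-float dot.o main.o -o prog */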
-mhard-quad-float Generate output containing quad-word (long double) floating-point instructions. -msoft-quad-float Generate output containing library calls for quad-word (long double) floating-point instructions. The functions called are those specified in the SPARC ABI. This is the default. As of this writing, there are no SPARC implementations that have hardware support for the quad-word floating-point instructions. They all invoke a trap handler for one of these instructions, and then the trap handler emulates the effect of the instruction. Because of the trap handler overhead, this is much slower than calling the ABI library routines. Thus the -msoft-quad-float option is the default. -mno-unaligned-doubles -munaligned-doubles Assume that doubles have 8-byte alignment. This is the default. With -munaligned-doubles, GCC assumes that doubles have 8-byte alignment only if they are contained in another type, or if they have an absolute address. Otherwise, it assumes they have 4-byte alignment. Specifying this option avoids some rare compatibility problems with code generated by other compilers. It is not the default because it results in a performance loss, especially for floating-point code. -muser-mode -mno-user-mode Do not generate code that can only run in supervisor mode. This is relevant only for the "casa" instruction emitted for the LEON3 processor. This is the default. -mfaster-structs -mno-faster-structs With -mfaster-structs, the compiler assumes that structures should have 8-byte alignment. This enables the use of pairs of "ldd" and "std" instructions for copies in structure assignment, in place of twice as many "ld" and "st" pairs. However, the use of this changed alignment directly violates the SPARC ABI. Thus, it's intended only for use on targets where the developer acknowledges that their resulting code is not directly in line with the rules of the ABI. -mstd-struct-return -mno-std-struct-return With -mstd-struct-return, the compiler generates checking code in functions returning structures or unions to detect size mismatches between the two sides of function calls, as per the 32-bit ABI. The default is -mno-std-struct-return. This option has no effect in 64-bit mode. -mlra -mno-lra Enable Local Register Allocation. This is the default for SPARC since GCC 7 so -mno-lra needs to be passed to get old Reload. -mcpu=cpu_type Set the instruction set, register set, and instruction scheduling parameters for machine type cpu_type. Supported values for cpu_type are v7, cypress, v8, supersparc, hypersparc, leon, leon3, leon3v7, sparclite, f930, f934, sparclite86x, sparclet, tsc701, v9, ultrasparc, ultrasparc3, niagara, niagara2, niagara3, niagara4, niagara7 and m8. Native Solaris and GNU/Linux toolchains also support the value native, which selects the best architecture option for the host processor. -mcpu=native has no effect if GCC does not recognize the processor. Default instruction scheduling parameters are used for values that select an architecture and not an implementation. These are v7, v8, sparclite, sparclet, v9. Here is a list of each supported architecture and their supported implementations.

    v7          cypress, leon3v7
    v8          supersparc, hypersparc, leon, leon3
    sparclite   f930, f934, sparclite86x
    sparclet    tsc701
    v9          ultrasparc, ultrasparc3, niagara, niagara2, niagara3, niagara4, niagara7, m8

By default (unless configured otherwise), GCC generates code for the V7 variant of the SPARC architecture.
With -mcpu=cypress, the compiler additionally optimizes it for the Cypress CY7C602 chip, as used in the SPARCStation/SPARCServer 3xx series. This is also appropriate for the older SPARCStation 1, 2, IPX etc. With -mcpu=v8, GCC generates code for the V8 variant of the SPARC architecture. The only difference from V7 code is that the compiler emits the integer multiply and integer divide instructions which exist in SPARC-V8 but not in SPARC-V7. With -mcpu=supersparc, the compiler additionally optimizes it for the SuperSPARC chip, as used in the SPARCStation 10, 1000 and 2000 series. With -mcpu=sparclite, GCC generates code for the SPARClite variant of the SPARC architecture. This adds the integer multiply, integer divide step and scan ("ffs") instructions which exist in SPARClite but not in SPARC-V7. With -mcpu=f930, the compiler additionally optimizes it for the Fujitsu MB86930 chip, which is the original SPARClite, with no FPU. With -mcpu=f934, the compiler additionally optimizes it for the Fujitsu MB86934 chip, which is the more recent SPARClite with FPU. With -mcpu=sparclet, GCC generates code for the SPARClet variant of the SPARC architecture. This adds the integer multiply, multiply/accumulate, integer divide step and scan ("ffs") instructions which exist in SPARClet but not in SPARC-V7. With -mcpu=tsc701, the compiler additionally optimizes it for the TEMIC SPARClet chip. With -mcpu=v9, GCC generates code for the V9 variant of the SPARC architecture. This adds 64-bit integer and floating- point move instructions, 3 additional floating-point condition code registers and conditional move instructions. With -mcpu=ultrasparc, the compiler additionally optimizes it for the Sun UltraSPARC I/II/IIi chips. With -mcpu=ultrasparc3, the compiler additionally optimizes it for the Sun UltraSPARC III/III+/IIIi/IIIi+/IV/IV+ chips. With -mcpu=niagara, the compiler additionally optimizes it for Sun UltraSPARC T1 chips. With -mcpu=niagara2, the compiler additionally optimizes it for Sun UltraSPARC T2 chips. With -mcpu=niagara3, the compiler additionally optimizes it for Sun UltraSPARC T3 chips. With -mcpu=niagara4, the compiler additionally optimizes it for Sun UltraSPARC T4 chips. With -mcpu=niagara7, the compiler additionally optimizes it for Oracle SPARC M7 chips. With -mcpu=m8, the compiler additionally optimizes it for Oracle M8 chips. -mtune=cpu_type Set the instruction scheduling parameters for machine type cpu_type, but do not set the instruction set or register set that the option -mcpu=cpu_type does. The same values for -mcpu=cpu_type can be used for -mtune=cpu_type, but the only useful values are those that select a particular CPU implementation. Those are cypress, supersparc, hypersparc, leon, leon3, leon3v7, f930, f934, sparclite86x, tsc701, ultrasparc, ultrasparc3, niagara, niagara2, niagara3, niagara4, niagara7 and m8. With native Solaris and GNU/Linux toolchains, native can also be used. -mv8plus -mno-v8plus With -mv8plus, GCC generates code for the SPARC-V8+ ABI. The difference from the V8 ABI is that the global and out registers are considered 64 bits wide. This is enabled by default on Solaris in 32-bit mode for all SPARC-V9 processors. -mvis -mno-vis With -mvis, GCC generates code that takes advantage of the UltraSPARC Visual Instruction Set extensions. The default is -mno-vis. -mvis2 -mno-vis2 With -mvis2, GCC generates code that takes advantage of version 2.0 of the UltraSPARC Visual Instruction Set extensions. 
The default is -mvis2 when targeting a cpu that supports such instructions, such as UltraSPARC-III and later. Setting -mvis2 also sets -mvis. -mvis3 -mno-vis3 With -mvis3, GCC generates code that takes advantage of version 3.0 of the UltraSPARC Visual Instruction Set extensions. The default is -mvis3 when targeting a cpu that supports such instructions, such as niagara-3 and later. Setting -mvis3 also sets -mvis2 and -mvis. -mvis4 -mno-vis4 With -mvis4, GCC generates code that takes advantage of version 4.0 of the UltraSPARC Visual Instruction Set extensions. The default is -mvis4 when targeting a cpu that supports such instructions, such as niagara-7 and later. Setting -mvis4 also sets -mvis3, -mvis2 and -mvis. -mvis4b -mno-vis4b With -mvis4b, GCC generates code that takes advantage of version 4.0 of the UltraSPARC Visual Instruction Set extensions, plus the additional VIS instructions introduced in the Oracle SPARC Architecture 2017. The default is -mvis4b when targeting a cpu that supports such instructions, such as m8 and later. Setting -mvis4b also sets -mvis4, -mvis3, -mvis2 and -mvis. -mcbcond -mno-cbcond With -mcbcond, GCC generates code that takes advantage of the UltraSPARC Compare-and-Branch-on-Condition instructions. The default is -mcbcond when targeting a CPU that supports such instructions, such as Niagara-4 and later. -mfmaf -mno-fmaf With -mfmaf, GCC generates code that takes advantage of the UltraSPARC Fused Multiply-Add Floating-point instructions. The default is -mfmaf when targeting a CPU that supports such instructions, such as Niagara-3 and later. -mfsmuld -mno-fsmuld With -mfsmuld, GCC generates code that takes advantage of the Floating-point Multiply Single to Double (FsMULd) instruction. The default is -mfsmuld when targeting a CPU supporting the architecture versions V8 or V9 with FPU except -mcpu=leon. -mpopc -mno-popc With -mpopc, GCC generates code that takes advantage of the UltraSPARC Population Count instruction. The default is -mpopc when targeting a CPU that supports such an instruction, such as Niagara-2 and later. -msubxc -mno-subxc With -msubxc, GCC generates code that takes advantage of the UltraSPARC Subtract-Extended-with-Carry instruction. The default is -msubxc when targeting a CPU that supports such an instruction, such as Niagara-7 and later. -mfix-at697f Enable the documented workaround for the single erratum of the Atmel AT697F processor (which corresponds to erratum #13 of the AT697E processor). -mfix-ut699 Enable the documented workarounds for the floating-point errata and the data cache nullify errata of the UT699 processor. -mfix-ut700 Enable the documented workaround for the back-to-back store errata of the UT699E/UT700 processor. -mfix-gr712rc Enable the documented workaround for the back-to-back store errata of the GR712RC processor. These -m options are supported in addition to the above on SPARC-V9 processors in 64-bit environments: -m32 -m64 Generate code for a 32-bit or 64-bit environment. The 32-bit environment sets int, long and pointer to 32 bits. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits. -mcmodel=which Set the code model to one of medlow The Medium/Low code model: 64-bit addresses, programs must be linked in the low 32 bits of memory. Programs can be statically or dynamically linked. 
medmid The Medium/Middle code model: 64-bit addresses, programs must be linked in the low 44 bits of memory, the text and data segments must be less than 2GB in size and the data segment must be located within 2GB of the text segment. medany The Medium/Anywhere code model: 64-bit addresses, programs may be linked anywhere in memory, the text and data segments must be less than 2GB in size and the data segment must be located within 2GB of the text segment. embmedany The Medium/Anywhere code model for embedded systems: 64-bit addresses, the text and data segments must be less than 2GB in size, both starting anywhere in memory (determined at link time). The global register %g4 points to the base of the data segment. Programs are statically linked and PIC is not supported. -mmemory-model=mem-model Set the memory model in force on the processor to one of default The default memory model for the processor and operating system. rmo Relaxed Memory Order pso Partial Store Order tso Total Store Order sc Sequential Consistency These memory models are formally defined in Appendix D of the SPARC-V9 architecture manual, as set in the processor's "PSTATE.MM" field. -mstack-bias -mno-stack-bias With -mstack-bias, GCC assumes that the stack pointer, and frame pointer if present, are offset by -2047 which must be added back when making stack frame references. This is the default in 64-bit mode. Otherwise, assume no such offset is present. SPU Options These -m options are supported on the SPU: -mwarn-reloc -merror-reloc The loader for SPU does not handle dynamic relocations. By default, GCC gives an error when it generates code that requires a dynamic relocation. -mno-error-reloc disables the error, -mwarn-reloc generates a warning instead. -msafe-dma -munsafe-dma Instructions that initiate or test completion of DMA must not be reordered with respect to loads and stores of the memory that is being accessed. With -munsafe-dma you must use the "volatile" keyword to protect memory accesses, but that can lead to inefficient code in places where the memory is known to not change. Rather than mark the memory as volatile, you can use -msafe-dma to tell the compiler to treat the DMA instructions as potentially affecting all memory. -mbranch-hints By default, GCC generates a branch hint instruction to avoid pipeline stalls for always-taken or probably-taken branches. A hint is not generated closer than 8 instructions away from its branch. There is little reason to disable them, except for debugging purposes, or to make an object a little bit smaller. -msmall-mem -mlarge-mem By default, GCC generates code assuming that addresses are never larger than 18 bits. With -mlarge-mem code is generated that assumes a full 32-bit address. -mstdmain By default, GCC links against startup code that assumes the SPU-style main function interface (which has an unconventional parameter list). With -mstdmain, GCC links your program against startup code that assumes a C99-style interface to "main", including a local copy of "argv" strings. -mfixed-range=register-range Generate code treating the given register range as fixed registers. A fixed register is one that the register allocator cannot use. This is useful when compiling kernel code. A register range is specified as two registers separated by a dash. Multiple register ranges can be specified separated by a comma. -mea32 -mea64 Compile code assuming that pointers to the PPU address space accessed via the "__ea" named address space qualifier are either 32 or 64 bits wide. 
The default is 32 bits. As this is an ABI-changing option, all object code in an executable must be compiled with the same setting. -maddress-space-conversion -mno-address-space-conversion Allow/disallow treating the "__ea" address space as superset of the generic address space. This enables explicit type casts between "__ea" and generic pointer as well as implicit conversions of generic pointers to "__ea" pointers. The default is to allow address space pointer conversions. -mcache-size=cache-size This option controls the version of libgcc that the compiler links to an executable and selects a software-managed cache for accessing variables in the "__ea" address space with a particular cache size. Possible options for cache-size are 8, 16, 32, 64 and 128. The default cache size is 64KB. -matomic-updates -mno-atomic-updates This option controls the version of libgcc that the compiler links to an executable and selects whether atomic updates to the software-managed cache of PPU-side variables are used. If you use atomic updates, changes to a PPU variable from SPU code using the "__ea" named address space qualifier do not interfere with changes to other PPU variables residing in the same cache line from PPU code. If you do not use atomic updates, such interference may occur; however, writing back cache lines is more efficient. The default behavior is to use atomic updates. -mdual-nops -mdual-nops=n By default, GCC inserts NOPs to increase dual issue when it expects it to increase performance. n can be a value from 0 to 10. A smaller n inserts fewer NOPs. 10 is the default, 0 is the same as -mno-dual-nops. Disabled with -Os. -mhint-max-nops=n Maximum number of NOPs to insert for a branch hint. A branch hint must be at least 8 instructions away from the branch it is affecting. GCC inserts up to n NOPs to enforce this, otherwise it does not generate the branch hint. -mhint-max-distance=n The encoding of the branch hint instruction limits the hint to be within 256 instructions of the branch it is affecting. By default, GCC makes sure it is within 125. -msafe-hints Work around a hardware bug that causes the SPU to stall indefinitely. By default, GCC inserts the "hbrp" instruction to make sure this stall won't happen. Options for System V These additional options are available on System V Release 4 for compatibility with other compilers on those systems: -G Create a shared object. It is recommended that -symbolic or -shared be used instead. -Qy Identify the versions of each tool used by the compiler, in a ".ident" assembler directive in the output. -Qn Refrain from adding ".ident" directives to the output file (this is the default). -YP,dirs Search the directories dirs, and no others, for libraries specified with -l. -Ym,dir Look in the directory dir to find the M4 preprocessor. The assembler uses this option. TILE-Gx Options These -m options are supported on the TILE-Gx: -mcmodel=small Generate code for the small model. The distance for direct calls is limited to 500M in either direction. PC-relative addresses are 32 bits. Absolute addresses support the full address range. -mcmodel=large Generate code for the large model. There is no limitation on call distance, pc-relative addresses, or absolute addresses. -mcpu=name Selects the type of CPU to be targeted. Currently the only supported type is tilegx. -m32 -m64 Generate code for a 32-bit or 64-bit environment. The 32-bit environment sets int, long, and pointer to 32 bits. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits. 
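The difference between the two TILE-Gx code models described above amounts to how far a direct call may reach; the sketch below is purely illustrative, and the cross-compiler name tilegx-linux-gnu-gcc is an assumption.

    /* far_call.c - illustrative only. */
    extern void helper(void);   /* in a very large binary this routine may
                                   end up more than 500M away */

    void run(void)
    {
        helper();
    }

    /* Possible invocations (hypothetical):
       small model, direct calls limited to 500M in either direction:
         tilegx-linux-gnu-gcc -O2 -mcmodel=small -c far_call.c
       large model, no limitation on call distance:
         tilegx-linux-gnu-gcc -O2 -mcmodel=large -c far_call.c */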
-mbig-endian -mlittle-endian Generate code in big/little endian mode, respectively. TILEPro Options These -m options are supported on the TILEPro: -mcpu=name Selects the type of CPU to be targeted. Currently the only supported type is tilepro. -m32 Generate code for a 32-bit environment, which sets int, long, and pointer to 32 bits. This is the only supported behavior so the flag is essentially ignored. V850 Options These -m options are defined for V850 implementations: -mlong-calls -mno-long-calls Treat all calls as being far away (near). If calls are assumed to be far away, the compiler always loads the function's address into a register, and calls indirect through the pointer. -mno-ep -mep Do not optimize (do optimize) basic blocks that use the same index pointer 4 or more times to copy pointer into the "ep" register, and use the shorter "sld" and "sst" instructions. The -mep option is on by default if you optimize. -mno-prolog-function -mprolog-function Do not use (do use) external functions to save and restore registers at the prologue and epilogue of a function. The external functions are slower, but use less code space if more than one function saves the same number of registers. The -mprolog-function option is on by default if you optimize. -mspace Try to make the code as small as possible. At present, this just turns on the -mep and -mprolog-function options. -mtda=n Put static or global variables whose size is n bytes or less into the tiny data area that register "ep" points to. The tiny data area can hold up to 256 bytes in total (128 bytes for byte references). -msda=n Put static or global variables whose size is n bytes or less into the small data area that register "gp" points to. The small data area can hold up to 64 kilobytes. -mzda=n Put static or global variables whose size is n bytes or less into the first 32 kilobytes of memory. -mv850 Specify that the target processor is the V850. -mv850e3v5 Specify that the target processor is the V850E3V5. The preprocessor constant "__v850e3v5__" is defined if this option is used. -mv850e2v4 Specify that the target processor is the V850E3V5. This is an alias for the -mv850e3v5 option. -mv850e2v3 Specify that the target processor is the V850E2V3. The preprocessor constant "__v850e2v3__" is defined if this option is used. -mv850e2 Specify that the target processor is the V850E2. The preprocessor constant "__v850e2__" is defined if this option is used. -mv850e1 Specify that the target processor is the V850E1. The preprocessor constants "__v850e1__" and "__v850e__" are defined if this option is used. -mv850es Specify that the target processor is the V850ES. This is an alias for the -mv850e1 option. -mv850e Specify that the target processor is the V850E. The preprocessor constant "__v850e__" is defined if this option is used. If neither -mv850 nor -mv850e nor -mv850e1 nor -mv850e2 nor -mv850e2v3 nor -mv850e3v5 are defined then a default target processor is chosen and the relevant __v850*__ preprocessor constant is defined. The preprocessor constants "__v850" and "__v851__" are always defined, regardless of which processor variant is the target. -mdisable-callt -mno-disable-callt This option suppresses generation of the "CALLT" instruction for the v850e, v850e1, v850e2, v850e2v3 and v850e3v5 flavors of the v850 architecture. This option is enabled by default when the RH850 ABI is in use (see -mrh850-abi), and disabled by default when the GCC ABI is in use. 
If "CALLT" instructions are being generated then the C preprocessor symbol "__V850_CALLT__" is defined. -mrelax -mno-relax Pass on (or do not pass on) the -mrelax command-line option to the assembler. -mlong-jumps -mno-long-jumps Disable (or re-enable) the generation of PC-relative jump instructions. -msoft-float -mhard-float Disable (or re-enable) the generation of hardware floating point instructions. This option is only significant when the target architecture is V850E2V3 or higher. If hardware floating point instructions are being generated then the C preprocessor symbol "__FPU_OK__" is defined, otherwise the symbol "__NO_FPU__" is defined. -mloop Enables the use of the e3v5 LOOP instruction. The use of this instruction is not enabled by default when the e3v5 architecture is selected because its use is still experimental. -mrh850-abi -mghs Enables support for the RH850 version of the V850 ABI. This is the default. With this version of the ABI the following rules apply: * Integer sized structures and unions are returned via a memory pointer rather than a register. * Large structures and unions (more than 8 bytes in size) are passed by value. * Functions are aligned to 16-bit boundaries. * The -m8byte-align command-line option is supported. * The -mdisable-callt command-line option is enabled by default. The -mno-disable-callt command-line option is not supported. When this version of the ABI is enabled the C preprocessor symbol "__V850_RH850_ABI__" is defined. -mgcc-abi Enables support for the old GCC version of the V850 ABI. With this version of the ABI the following rules apply: * Integer sized structures and unions are returned in register "r10". * Large structures and unions (more than 8 bytes in size) are passed by reference. * Functions are aligned to 32-bit boundaries, unless optimizing for size. * The -m8byte-align command-line option is not supported. * The -mdisable-callt command-line option is supported but not enabled by default. When this version of the ABI is enabled the C preprocessor symbol "__V850_GCC_ABI__" is defined. -m8byte-align -mno-8byte-align Enables support for "double" and "long long" types to be aligned on 8-byte boundaries. The default is to restrict the alignment of all objects to at most 4 bytes. When -m8byte-align is in effect the C preprocessor symbol "__V850_8BYTE_ALIGN__" is defined. -mbig-switch Generate code suitable for big switch tables. Use this option only if the assembler/linker complain about out of range branches within a switch table. -mapp-regs This option causes r2 and r5 to be used in the code generated by the compiler. This setting is the default. -mno-app-regs This option causes r2 and r5 to be treated as fixed registers. VAX Options These -m options are defined for the VAX: -munix Do not output certain jump instructions ("aobleq" and so on) that the Unix assembler for the VAX cannot handle across long ranges. -mgnu Do output those jump instructions, on the assumption that the GNU assembler is being used. -mg Output code for G-format floating-point numbers instead of D-format. Visium Options -mdebug A program which performs file I/O and is destined to run on an MCM target should be linked with this option. It causes the libraries libc.a and libdebug.a to be linked. The program should be run on the target under the control of the GDB remote debugging stub. -msim A program which performs file I/O and is destined to run on the simulator should be linked with this option. This causes libraries libc.a and libsim.a to be linked.
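The Visium -mdebug and -msim options above only change which support libraries are linked in; the sketch below shows a file-I/O program built both ways. The cross-compiler name visium-elf-gcc is an assumption for the example.

    /* hello_io.c - performs file I/O, so it should be linked with
       -mdebug or -msim as described above. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("out.txt", "w");
        if (f != NULL) {
            fputs("hello\n", f);
            fclose(f);
        }
        return 0;
    }

    /* Hypothetical invocations:
       for the simulator (links libc.a and libsim.a):
         visium-elf-gcc -O2 -msim hello_io.c -o hello_io
       for an MCM target under the GDB remote stub (links libc.a and
       libdebug.a):
         visium-elf-gcc -O2 -mdebug hello_io.c -o hello_io */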
-mfpu -mhard-float Generate code containing floating-point instructions. This is the default. -mno-fpu -msoft-float Generate code containing library calls for floating-point. -msoft-float changes the calling convention in the output file; therefore, it is only useful if you compile all of a program with this option. In particular, you need to compile libgcc.a, the library that comes with GCC, with -msoft-float in order for this to work. -mcpu=cpu_type Set the instruction set, register set, and instruction scheduling parameters for machine type cpu_type. Supported values for cpu_type are mcm, gr5 and gr6. mcm is a synonym of gr5 present for backward compatibility. By default (unless configured otherwise), GCC generates code for the GR5 variant of the Visium architecture. With -mcpu=gr6, GCC generates code for the GR6 variant of the Visium architecture. The only difference from GR5 code is that the compiler will generate block move instructions. -mtune=cpu_type Set the instruction scheduling parameters for machine type cpu_type, but do not set the instruction set or register set that the option -mcpu=cpu_type would. -msv-mode Generate code for the supervisor mode, where there are no restrictions on the access to general registers. This is the default. -muser-mode Generate code for the user mode, where the access to some general registers is forbidden: on the GR5, registers r24 to r31 cannot be accessed in this mode; on the GR6, only registers r29 to r31 are affected. VMS Options These -m options are defined for the VMS implementations: -mvms-return-codes Return VMS condition codes from "main". The default is to return POSIX-style condition (e.g. error) codes. -mdebug-main=prefix Flag the first routine whose name starts with prefix as the main routine for the debugger. -mmalloc64 Default to 64-bit memory allocation routines. -mpointer-size=size Set the default size of pointers. Possible options for size are 32 or short for 32 bit pointers, 64 or long for 64 bit pointers, and no for supporting only 32 bit pointers. The later option disables "pragma pointer_size". VxWorks Options The options in this section are defined for all VxWorks targets. Options specific to the target hardware are listed with the other options for that target. -mrtp GCC can generate code for both VxWorks kernels and real time processes (RTPs). This option switches from the former to the latter. It also defines the preprocessor macro "__RTP__". -non-static Link an RTP executable against shared libraries rather than static libraries. The options -static and -shared can also be used for RTPs; -static is the default. -Bstatic -Bdynamic These options are passed down to the linker. They are defined for compatibility with Diab. -Xbind-lazy Enable lazy binding of function calls. This option is equivalent to -Wl,-z,now and is defined for compatibility with Diab. -Xbind-now Disable lazy binding of function calls. This option is the default and is defined for compatibility with Diab. x86 Options These -m options are defined for the x86 family of computers. -march=cpu-type Generate instructions for the machine type cpu-type. In contrast to -mtune=cpu-type, which merely tunes the generated code for the specified cpu-type, -march=cpu-type allows GCC to generate code that may not run at all on processors other than the one indicated. Specifying -march=cpu-type implies -mtune=cpu-type. 
The choices for cpu-type are: native This selects the CPU to generate code for at compilation time by determining the processor type of the compiling machine. Using -march=native enables all instruction subsets supported by the local machine (hence the result might not run on different machines). Using -mtune=native produces code optimized for the local machine under the constraints of the selected instruction set. x86-64 A generic CPU with 64-bit extensions. i386 Original Intel i386 CPU. i486 Intel i486 CPU. (No scheduling is implemented for this chip.) i586 pentium Intel Pentium CPU with no MMX support. lakemont Intel Lakemont MCU, based on Intel Pentium CPU. pentium-mmx Intel Pentium MMX CPU, based on Pentium core with MMX instruction set support. pentiumpro Intel Pentium Pro CPU. i686 When used with -march, the Pentium Pro instruction set is used, so the code runs on all i686 family chips. When used with -mtune, it has the same meaning as generic. pentium2 Intel Pentium II CPU, based on Pentium Pro core with MMX instruction set support. pentium3 pentium3m Intel Pentium III CPU, based on Pentium Pro core with MMX and SSE instruction set support. pentium-m Intel Pentium M; low-power version of Intel Pentium III CPU with MMX, SSE and SSE2 instruction set support. Used by Centrino notebooks. pentium4 pentium4m Intel Pentium 4 CPU with MMX, SSE and SSE2 instruction set support. prescott Improved version of Intel Pentium 4 CPU with MMX, SSE, SSE2 and SSE3 instruction set support. nocona Improved version of Intel Pentium 4 CPU with 64-bit extensions, MMX, SSE, SSE2 and SSE3 instruction set support. core2 Intel Core 2 CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. nehalem Intel Nehalem CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2 and POPCNT instruction set support. westmere Intel Westmere CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES and PCLMUL instruction set support. sandybridge Intel Sandy Bridge CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AES and PCLMUL instruction set support. ivybridge Intel Ivy Bridge CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AES, PCLMUL, FSGSBASE, RDRND and F16C instruction set support. haswell Intel Haswell CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2 and F16C instruction set support. broadwell Intel Broadwell CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED ADCX and PREFETCHW instruction set support. skylake Intel Skylake CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC and XSAVES instruction set support. bonnell Intel Bonnell CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. silvermont Intel Silvermont CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES, PREFETCHW, PCLMUL and RDRND instruction set support. goldmont Intel Goldmont CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES, PREFETCHW, PCLMUL, RDRND, XSAVE, XSAVEC, XSAVES, XSAVEOPT and FSGSBASE instruction set support. 
goldmont-plus Intel Goldmont Plus CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES, PREFETCHW, PCLMUL, RDRND, XSAVE, XSAVEC, XSAVES, XSAVEOPT, FSGSBASE, PTWRITE, RDPID, SGX and UMIP instruction set support. tremont Intel Tremont CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AES, PREFETCHW, PCLMUL, RDRND, XSAVE, XSAVEC, XSAVES, XSAVEOPT, FSGSBASE, PTWRITE, RDPID, SGX, UMIP, GFNI-SSE, CLWB, MOVDIRI, MOVDIR64B, CLDEMOTE and WAITPKG instruction set support. knl Intel Knight's Landing CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, PREFETCHWT1, AVX512F, AVX512PF, AVX512ER and AVX512CD instruction set support. knm Intel Knights Mill CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, PREFETCHWT1, AVX512F, AVX512PF, AVX512ER, AVX512CD, AVX5124VNNIW, AVX5124FMAPS and AVX512VPOPCNTDQ instruction set support. skylake-avx512 Intel Skylake Server CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, CLWB, AVX512VL, AVX512BW, AVX512DQ and AVX512CD instruction set support. cannonlake Intel Cannonlake Server CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, AVX512VL, AVX512BW, AVX512DQ, AVX512CD, AVX512VBMI, AVX512IFMA, SHA and UMIP instruction set support. icelake-client Intel Icelake Client CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, AVX512VL, AVX512BW, AVX512DQ, AVX512CD, AVX512VBMI, AVX512IFMA, SHA, CLWB, UMIP, RDPID, GFNI, AVX512VBMI2, AVX512VPOPCNTDQ, AVX512BITALG, AVX512VNNI, VPCLMULQDQ, VAES instruction set support. icelake-server Intel Icelake Server CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, AVX512VL, AVX512BW, AVX512DQ, AVX512CD, AVX512VBMI, AVX512IFMA, SHA, CLWB, UMIP, RDPID, GFNI, AVX512VBMI2, AVX512VPOPCNTDQ, AVX512BITALG, AVX512VNNI, VPCLMULQDQ, VAES, PCONFIG and WBNOINVD instruction set support. cascadelake Intel Cascadelake CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, CLWB, AVX512VL, AVX512BW, AVX512DQ, AVX512CD and AVX512VNNI instruction set support. 
tigerlake Intel Tigerlake CPU with 64-bit extensions, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, PKU, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA, BMI, BMI2, F16C, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVEC, XSAVES, AVX512F, AVX512VL, AVX512BW, AVX512DQ, AVX512CD, AVX512VBMI, AVX512IFMA, SHA, CLWB, UMIP, RDPID, GFNI, AVX512VBMI2, AVX512VPOPCNTDQ, AVX512BITALG, AVX512VNNI, VPCLMULQDQ, VAES, PCONFIG, WBNOINVD, MOVDIRI, MOVDIR64B and CLWB instruction set support. k6 AMD K6 CPU with MMX instruction set support. k6-2 k6-3 Improved versions of AMD K6 CPU with MMX and 3DNow! instruction set support. athlon athlon-tbird AMD Athlon CPU with MMX, 3DNow!, enhanced 3DNow! and SSE prefetch instructions support. athlon-4 athlon-xp athlon-mp Improved AMD Athlon CPU with MMX, 3DNow!, enhanced 3DNow! and full SSE instruction set support. k8 opteron athlon64 athlon-fx Processors based on the AMD K8 core with x86-64 instruction set support, including the AMD Opteron, Athlon 64, and Athlon 64 FX processors. (This supersets MMX, SSE, SSE2, 3DNow!, enhanced 3DNow! and 64-bit instruction set extensions.) k8-sse3 opteron-sse3 athlon64-sse3 Improved versions of AMD K8 cores with SSE3 instruction set support. amdfam10 barcelona CPUs based on AMD Family 10h cores with x86-64 instruction set support. (This supersets MMX, SSE, SSE2, SSE3, SSE4A, 3DNow!, enhanced 3DNow!, ABM and 64-bit instruction set extensions.) bdver1 CPUs based on AMD Family 15h cores with x86-64 instruction set support. (This supersets FMA4, AVX, XOP, LWP, AES, PCLMUL, CX16, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM and 64-bit instruction set extensions.) bdver2 AMD Family 15h core based CPUs with x86-64 instruction set support. (This supersets BMI, TBM, F16C, FMA, FMA4, AVX, XOP, LWP, AES, PCLMUL, CX16, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM and 64-bit instruction set extensions.) bdver3 AMD Family 15h core based CPUs with x86-64 instruction set support. (This supersets BMI, TBM, F16C, FMA, FMA4, FSGSBASE, AVX, XOP, LWP, AES, PCLMUL, CX16, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM and 64-bit instruction set extensions.) bdver4 AMD Family 15h core based CPUs with x86-64 instruction set support. (This supersets BMI, BMI2, TBM, F16C, FMA, FMA4, FSGSBASE, AVX, AVX2, XOP, LWP, AES, PCLMUL, CX16, MOVBE, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM and 64-bit instruction set extensions.) znver1 AMD Family 17h core based CPUs with x86-64 instruction set support. (This supersets BMI, BMI2, F16C, FMA, FSGSBASE, AVX, AVX2, ADCX, RDSEED, MWAITX, SHA, CLZERO, AES, PCLMUL, CX16, MOVBE, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM, XSAVEC, XSAVES, CLFLUSHOPT, POPCNT, and 64-bit instruction set extensions.) znver2 AMD Family 17h core based CPUs with x86-64 instruction set support. (This supersets BMI, BMI2, CLWB, F16C, FMA, FSGSBASE, AVX, AVX2, ADCX, RDSEED, MWAITX, SHA, CLZERO, AES, PCLMUL, CX16, MOVBE, MMX, SSE, SSE2, SSE3, SSE4A, SSSE3, SSE4.1, SSE4.2, ABM, XSAVEC, XSAVES, CLFLUSHOPT, POPCNT, and 64-bit instruction set extensions.) btver1 CPUs based on AMD Family 14h cores with x86-64 instruction set support. (This supersets MMX, SSE, SSE2, SSE3, SSSE3, SSE4A, CX16, ABM and 64-bit instruction set extensions.) btver2 CPUs based on AMD Family 16h cores with x86-64 instruction set support. This includes MOVBE, F16C, BMI, AVX, PCLMUL, AES, SSE4.2, SSE4.1, CX16, ABM, SSE4A, SSSE3, SSE3, SSE2, SSE, MMX and 64-bit instruction set extensions.
winchip-c6 IDT WinChip C6 CPU, dealt in same way as i486 with additional MMX instruction set support. winchip2 IDT WinChip 2 CPU, dealt in same way as i486 with additional MMX and 3DNow! instruction set support. c3 VIA C3 CPU with MMX and 3DNow! instruction set support. (No scheduling is implemented for this chip.) c3-2 VIA C3-2 (Nehemiah/C5XL) CPU with MMX and SSE instruction set support. (No scheduling is implemented for this chip.) c7 VIA C7 (Esther) CPU with MMX, SSE, SSE2 and SSE3 instruction set support. (No scheduling is implemented for this chip.) samuel-2 VIA Eden Samuel 2 CPU with MMX and 3DNow! instruction set support. (No scheduling is implemented for this chip.) nehemiah VIA Eden Nehemiah CPU with MMX and SSE instruction set support. (No scheduling is implemented for this chip.) esther VIA Eden Esther CPU with MMX, SSE, SSE2 and SSE3 instruction set support. (No scheduling is implemented for this chip.) eden-x2 VIA Eden X2 CPU with x86-64, MMX, SSE, SSE2 and SSE3 instruction set support. (No scheduling is implemented for this chip.) eden-x4 VIA Eden X4 CPU with x86-64, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX and AVX2 instruction set support. (No scheduling is implemented for this chip.) nano Generic VIA Nano CPU with x86-64, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. (No scheduling is implemented for this chip.) nano-1000 VIA Nano 1xxx CPU with x86-64, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. (No scheduling is implemented for this chip.) nano-2000 VIA Nano 2xxx CPU with x86-64, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support. (No scheduling is implemented for this chip.) nano-3000 VIA Nano 3xxx CPU with x86-64, MMX, SSE, SSE2, SSE3, SSSE3 and SSE4.1 instruction set support. (No scheduling is implemented for this chip.) nano-x2 VIA Nano Dual Core CPU with x86-64, MMX, SSE, SSE2, SSE3, SSSE3 and SSE4.1 instruction set support. (No scheduling is implemented for this chip.) nano-x4 VIA Nano Quad Core CPU with x86-64, MMX, SSE, SSE2, SSE3, SSSE3 and SSE4.1 instruction set support. (No scheduling is implemented for this chip.) geode AMD Geode embedded processor with MMX and 3DNow! instruction set support. -mtune=cpu-type Tune to cpu-type everything applicable about the generated code, except for the ABI and the set of available instructions. While picking a specific cpu-type schedules things appropriately for that particular chip, the compiler does not generate any code that cannot run on the default machine type unless you use a -march=cpu-type option. For example, if GCC is configured for i686-pc-linux-gnu then -mtune=pentium4 generates code that is tuned for Pentium 4 but still runs on i686 machines. The choices for cpu-type are the same as for -march. In addition, -mtune supports 2 extra choices for cpu-type: generic Produce code optimized for the most common IA32/AMD64/EM64T processors. If you know the CPU on which your code will run, then you should use the corresponding -mtune or -march option instead of -mtune=generic. But, if you do not know exactly what CPU users of your application will have, then you should use this option. As new processors are deployed in the marketplace, the behavior of this option will change. Therefore, if you upgrade to a newer version of GCC, code generation controlled by this option will change to reflect the processors that are most common at the time that version of GCC is released. 
There is no -march=generic option because -march indicates the instruction set the compiler can use, and there is no generic instruction set applicable to all processors. In contrast, -mtune indicates the processor (or, in this case, collection of processors) for which the code is optimized. intel Produce code optimized for the most current Intel processors, which are Haswell and Silvermont for this version of GCC. If you know the CPU on which your code will run, then you should use the corresponding -mtune or -march option instead of -mtune=intel. But, if you want your application to perform better on both Haswell and Silvermont, then you should use this option. As new Intel processors are deployed in the marketplace, the behavior of this option will change. Therefore, if you upgrade to a newer version of GCC, code generation controlled by this option will change to reflect the most current Intel processors at the time that version of GCC is released. There is no -march=intel option because -march indicates the instruction set the compiler can use, and there is no common instruction set applicable to all processors. In contrast, -mtune indicates the processor (or, in this case, collection of processors) for which the code is optimized. -mcpu=cpu-type A deprecated synonym for -mtune. -mfpmath=unit Generate floating-point arithmetic for selected unit unit. The choices for unit are: 387 Use the standard 387 floating-point coprocessor present on the majority of chips and emulated otherwise. Code compiled with this option runs almost everywhere. The temporary results are computed in 80-bit precision instead of the precision specified by the type, resulting in slightly different results compared to most other chips. See -ffloat-store for a more detailed description. This is the default choice for non-Darwin x86-32 targets. sse Use scalar floating-point instructions present in the SSE instruction set. This instruction set is supported by Pentium III and newer chips, and in the AMD line by Athlon-4, Athlon XP and Athlon MP chips. The earlier version of the SSE instruction set supports only single-precision arithmetic, thus the double and extended-precision arithmetic are still done using 387. A later version, present only in Pentium 4 and AMD x86-64 chips, supports double-precision arithmetic too. For the x86-32 compiler, you must use the -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default. The resulting code should be considerably faster in the majority of cases and avoid the numerical instability problems of 387 code, but may break some existing code that expects temporaries to be 80 bits. This is the default choice for the x86-64 compiler, Darwin x86-32 targets, and the default choice for x86-32 targets with the SSE2 instruction set when -ffast-math is enabled. sse,387 sse+387 both Attempt to utilize both instruction sets at once. This effectively doubles the number of available registers, and on chips with separate execution units for 387 and SSE the execution resources too. Use this option with care, as it is still experimental, because the GCC register allocator does not model separate functional units well, resulting in unstable performance. -masm=dialect Output assembly instructions using the selected dialect. Also affects which dialect is used for basic "asm" and extended "asm". Supported choices (in dialect order) are att or intel. The default is att. Darwin does not support intel.
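As an illustration (this fragment is not taken from the GCC sources), the following extended asm statement is written in the default AT&T dialect, in which the source operand precedes the destination; under -masm=intel the same template would have to use Intel operand order instead:

            /* Hypothetical example: compute x + 1 with a "leal" instruction
               written in AT&T syntax, as selected by -masm=att (the default).  */
            static inline unsigned int add_one (unsigned int x)
            {
              unsigned int out;
              __asm__ ("leal 1(%1), %0" : "=r" (out) : "r" (x));
              return out;
            }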
-mieee-fp -mno-ieee-fp Control whether or not the compiler uses IEEE floating-point comparisons. These correctly handle the case where the result of a comparison is unordered. -m80387 -mhard-float Generate output containing 80387 instructions for floating point. -mno-80387 -msoft-float Generate output containing library calls for floating point. Warning: the requisite libraries are not part of GCC. Normally the facilities of the machine's usual C compiler are used, but this cannot be done directly in cross-compilation. You must make your own arrangements to provide suitable library functions for cross-compilation. On machines where a function returns floating-point results in the 80387 register stack, some floating-point opcodes may be emitted even if -msoft-float is used. -mno-fp-ret-in-387 Do not use the FPU registers for return values of functions. The usual calling convention has functions return values of types "float" and "double" in an FPU register, even if there is no FPU. The idea is that the operating system should emulate an FPU. The option -mno-fp-ret-in-387 causes such values to be returned in ordinary CPU registers instead. -mno-fancy-math-387 Some 387 emulators do not support the "sin", "cos" and "sqrt" instructions for the 387. Specify this option to avoid generating those instructions. This option is overridden when -march indicates that the target CPU always has an FPU and so the instruction does not need emulation. These instructions are not generated unless you also use the -funsafe-math-optimizations switch. -malign-double -mno-align-double Control whether GCC aligns "double", "long double", and "long long" variables on a two-word boundary or a one-word boundary. Aligning "double" variables on a two-word boundary produces code that runs somewhat faster on a Pentium at the expense of more memory. On x86-64, -malign-double is enabled by default. Warning: if you use the -malign-double switch, structures containing the above types are aligned differently than the published application binary interface specifications for the x86-32 and are not binary compatible with structures in code compiled without that switch. -m96bit-long-double -m128bit-long-double These switches control the size of "long double" type. The x86-32 application binary interface specifies the size to be 96 bits, so -m96bit-long-double is the default in 32-bit mode. Modern architectures (Pentium and newer) prefer "long double" to be aligned to an 8- or 16-byte boundary. In arrays or structures conforming to the ABI, this is not possible. So specifying -m128bit-long-double aligns "long double" to a 16-byte boundary by padding the "long double" with an additional 32-bit zero. In the x86-64 compiler, -m128bit-long-double is the default choice as its ABI specifies that "long double" is aligned on 16-byte boundary. Notice that neither of these options enable any extra precision over the x87 standard of 80 bits for a "long double". Warning: if you override the default value for your target ABI, this changes the size of structures and arrays containing "long double" variables, as well as modifying the function calling convention for functions taking "long double". Hence they are not binary-compatible with code compiled without that switch. -mlong-double-64 -mlong-double-80 -mlong-double-128 These switches control the size of "long double" type. A size of 64 bits makes the "long double" type equivalent to the "double" type. This is the default for 32-bit Bionic C library. 
A size of 128 bits makes the "long double" type equivalent to the "__float128" type. This is the default for 64-bit Bionic C library. Warning: if you override the default value for your target ABI, this changes the size of structures and arrays containing "long double" variables, as well as modifying the function calling convention for functions taking "long double". Hence they are not binary-compatible with code compiled without that switch. -malign-data=type Control how GCC aligns variables. Supported values for type are compat uses an increased alignment value compatible with GCC 4.8 and earlier, abi uses the alignment value as specified by the psABI, and cacheline uses an increased alignment value to match the cache line size. compat is the default. -mlarge-data-threshold=threshold When -mcmodel=medium is specified, data objects larger than threshold are placed in the large data section. This value must be the same across all objects linked into the binary, and defaults to 65535. -mrtd Use a different function-calling convention, in which functions that take a fixed number of arguments return with the "ret num" instruction, which pops their arguments while returning. This saves one instruction in the caller since there is no need to pop the arguments there. You can specify that an individual function is called with this calling sequence with the function attribute "stdcall". You can also override the -mrtd option by using the function attribute "cdecl". Warning: this calling convention is incompatible with the one normally used on Unix, so you cannot use it if you need to call libraries compiled with the Unix compiler. Also, you must provide function prototypes for all functions that take variable numbers of arguments (including "printf"); otherwise incorrect code is generated for calls to those functions. In addition, seriously incorrect code results if you call a function with too many arguments. (Normally, extra arguments are harmlessly ignored.) -mregparm=num Control how many registers are used to pass integer arguments. By default, no registers are used to pass arguments, and at most 3 registers can be used. You can control this behavior for a specific function by using the function attribute "regparm". Warning: if you use this switch, and num is nonzero, then you must build all modules with the same value, including any libraries. This includes the system libraries and startup modules. -msseregparm Use SSE register passing conventions for float and double arguments and return values. You can control this behavior for a specific function by using the function attribute "sseregparm". Warning: if you use this switch then you must build all modules with the same value, including any libraries. This includes the system libraries and startup modules. -mvect8-ret-in-mem Return 8-byte vectors in memory instead of MMX registers. This is the default on Solaris 8 and 9 and VxWorks to match the ABI of the Sun Studio compilers until version 12. Later compiler versions (starting with Studio 12 Update 1) follow the ABI used by other x86 targets, which is the default on Solaris 10 and later. Only use this option if you need to remain compatible with existing code produced by those previous compiler versions or older versions of GCC. -mpc32 -mpc64 -mpc80 Set 80387 floating-point precision to 32, 64 or 80 bits.
When -mpc32 is specified, the significands of results of floating-point operations are rounded to 24 bits (single precision); -mpc64 rounds the significands of results of floating-point operations to 53 bits (double precision) and -mpc80 rounds the significands of results of floating-point operations to 64 bits (extended double precision), which is the default. When this option is used, floating-point operations in higher precisions are not available to the programmer without setting the FPU control word explicitly. Setting the rounding of floating-point operations to less than the default 80 bits can speed some programs by 2% or more. Note that some mathematical libraries assume that extended-precision (80-bit) floating-point operations are enabled by default; routines in such libraries could suffer significant loss of accuracy, typically through so-called "catastrophic cancellation", when this option is used to set the precision to less than extended precision. -mstackrealign Realign the stack at entry. On the x86, the -mstackrealign option generates an alternate prologue and epilogue that realigns the run-time stack if necessary. This supports mixing legacy codes that keep 4-byte stack alignment with modern codes that keep 16-byte stack alignment for SSE compatibility. See also the attribute "force_align_arg_pointer", applicable to individual functions. -mpreferred-stack-boundary=num Attempt to keep the stack boundary aligned to a 2 raised to num byte boundary. If -mpreferred-stack-boundary is not specified, the default is 4 (16 bytes or 128 bits). Warning: When generating code for the x86-64 architecture with SSE extensions disabled, -mpreferred-stack-boundary=3 can be used to keep the stack boundary aligned to an 8-byte boundary. Since the x86-64 ABI requires 16-byte stack alignment, this is ABI-incompatible and intended to be used in a controlled environment where stack space is an important limitation. This option leads to wrong code when functions compiled with 16-byte stack alignment (such as functions from a standard library) are called with a misaligned stack. In this case, SSE instructions may lead to misaligned memory access traps. In addition, variable arguments are handled incorrectly for 16-byte aligned objects (including x87 long double and __int128), leading to wrong results. You must build all modules with -mpreferred-stack-boundary=3, including any libraries. This includes the system libraries and startup modules. -mincoming-stack-boundary=num Assume the incoming stack is aligned to a 2 raised to num byte boundary. If -mincoming-stack-boundary is not specified, the one specified by -mpreferred-stack-boundary is used. On Pentium and Pentium Pro, "double" and "long double" values should be aligned to an 8-byte boundary (see -malign-double) or suffer significant run time performance penalties. On Pentium III, the Streaming SIMD Extension (SSE) data type "__m128" may not work properly if it is not 16-byte aligned. To ensure proper alignment of these values on the stack, the stack boundary must be as aligned as that required by any value stored on the stack. Further, every function must be generated such that it keeps the stack aligned. Thus calling a function compiled with a higher preferred stack boundary from a function compiled with a lower preferred stack boundary most likely misaligns the stack. It is recommended that libraries that use callbacks always use the default setting. This extra alignment does consume extra stack space, and generally increases code size.
Code that is sensitive to stack space usage, such as embedded systems and operating system kernels, may want to reduce the preferred alignment to -mpreferred-stack-boundary=2. -mmmx -msse -msse2 -msse3 -mssse3 -msse4 -msse4a -msse4.1 -msse4.2 -mavx -mavx2 -mavx512f -mavx512pf -mavx512er -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -msha -maes -mpclmul -mclflushopt -mclwb -mfsgsbase -mptwrite -mrdrnd -mf16c -mfma -mpconfig -mwbnoinvd -mfma4 -mprfchw -mrdpid -mprefetchwt1 -mrdseed -msgx -mxop -mlwp -m3dnow -m3dnowa -mpopcnt -mabm -madx -mbmi -mbmi2 -mlzcnt -mfxsr -mxsave -mxsaveopt -mxsavec -mxsaves -mrtm -mhle -mtbm -mmwaitx -mclzero -mpku -mavx512vbmi2 -mgfni -mvaes -mwaitpkg -mvpclmulqdq -mavx512bitalg -mmovdiri -mmovdir64b -mavx512vpopcntdq -mavx5124fmaps -mavx512vnni -mavx5124vnniw -mcldemote These switches enable the use of instructions in the MMX, SSE, SSE2, SSE3, SSSE3, SSE4, SSE4A, SSE4.1, SSE4.2, AVX, AVX2, AVX512F, AVX512PF, AVX512ER, AVX512CD, AVX512VL, AVX512BW, AVX512DQ, AVX512IFMA, AVX512VBMI, SHA, AES, PCLMUL, CLFLUSHOPT, CLWB, FSGSBASE, PTWRITE, RDRND, F16C, FMA, PCONFIG, WBNOINVD, FMA4, PREFETCHW, RDPID, PREFETCHWT1, RDSEED, SGX, XOP, LWP, 3DNow!, enhanced 3DNow!, POPCNT, ABM, ADX, BMI, BMI2, LZCNT, FXSR, XSAVE, XSAVEOPT, XSAVEC, XSAVES, RTM, HLE, TBM, MWAITX, CLZERO, PKU, AVX512VBMI2, GFNI, VAES, WAITPKG, VPCLMULQDQ, AVX512BITALG, MOVDIRI, MOVDIR64B, AVX512VPOPCNTDQ, AVX5124FMAPS, AVX512VNNI, AVX5124VNNIW, or CLDEMOTE extended instruction sets. Each has a corresponding -mno- option to disable use of these instructions. These extensions are also available as built-in functions: see x86 Built-in Functions, for details of the functions enabled and disabled by these switches. To generate SSE/SSE2 instructions automatically from floating-point code (as opposed to 387 instructions), see -mfpmath=sse. GCC depresses SSEx instructions when -mavx is used. Instead, it generates new AVX instructions or AVX equivalence for all SSEx instructions when needed. These options enable GCC to use these extended instructions in generated code, even without -mfpmath=sse. Applications that perform run-time CPU detection must compile separate files for each supported architecture, using the appropriate flags. In particular, the file containing the CPU detection code should be compiled without these options. -mdump-tune-features This option instructs GCC to dump the names of the x86 performance tuning features and default settings. The names can be used in -mtune-ctrl=feature-list. -mtune-ctrl=feature-list This option is used to do fine grain control of x86 code generation features. feature-list is a comma separated list of feature names. See also -mdump-tune-features. When specified, the feature is turned on if it is not preceded with ^, otherwise, it is turned off. -mtune-ctrl=feature- list is intended to be used by GCC developers. Using it may lead to code paths not covered by testing and can potentially result in compiler ICEs or runtime errors. -mno-default This option instructs GCC to turn off all tunable features. See also -mtune-ctrl=feature-list and -mdump-tune-features. -mcld This option instructs GCC to emit a "cld" instruction in the prologue of functions that use string instructions. String instructions depend on the DF flag to select between autoincrement or autodecrement mode. While the ABI specifies the DF flag to be cleared on function entry, some operating systems violate this specification by not clearing the DF flag in their exception dispatchers. 
The exception handler can be invoked with the DF flag set, which leads to wrong direction mode when string instructions are used. This option can be enabled by default on 32-bit x86 targets by configuring GCC with the --enable-cld configure option. Generation of "cld" instructions can be suppressed with the -mno-cld compiler option in this case. -mvzeroupper This option instructs GCC to emit a "vzeroupper" instruction before a transfer of control flow out of the function to minimize the AVX to SSE transition penalty as well as remove unnecessary "zeroupper" intrinsics. -mprefer-avx128 This option instructs GCC to use 128-bit AVX instructions instead of 256-bit AVX instructions in the auto-vectorizer. -mprefer-vector-width=opt This option instructs GCC to use opt-bit vector width in instructions instead of default on the selected platform. none No extra limitations applied to GCC other than defined by the selected platform. 128 Prefer 128-bit vector width for instructions. 256 Prefer 256-bit vector width for instructions. 512 Prefer 512-bit vector width for instructions. -mcx16 This option enables GCC to generate "CMPXCHG16B" instructions in 64-bit code to implement compare-and-exchange operations on 16-byte aligned 128-bit objects. This is useful for atomic updates of data structures exceeding one machine word in size. The compiler uses this instruction to implement __sync Builtins. However, for __atomic Builtins operating on 128-bit integers, a library call is always used. -msahf This option enables generation of "SAHF" instructions in 64-bit code. Early Intel Pentium 4 CPUs with Intel 64 support, prior to the introduction of Pentium 4 G1 step in December 2005, lacked the "LAHF" and "SAHF" instructions which are supported by AMD64. These are load and store instructions, respectively, for certain status flags. In 64-bit mode, the "SAHF" instruction is used to optimize "fmod", "drem", and "remainder" built-in functions; see Other Builtins for details. -mmovbe This option enables use of the "movbe" instruction to implement "__builtin_bswap32" and "__builtin_bswap64". -mshstk The -mshstk option enables shadow stack built-in functions from x86 Control-flow Enforcement Technology (CET). -mcrc32 This option enables built-in functions "__builtin_ia32_crc32qi", "__builtin_ia32_crc32hi", "__builtin_ia32_crc32si" and "__builtin_ia32_crc32di" to generate the "crc32" machine instruction. -mrecip This option enables use of "RCPSS" and "RSQRTSS" instructions (and their vectorized variants "RCPPS" and "RSQRTPS") with an additional Newton-Raphson step to increase precision instead of "DIVSS" and "SQRTSS" (and their vectorized variants) for single-precision floating-point arguments. These instructions are generated only when -funsafe-math-optimizations is enabled together with -ffinite-math-only and -fno-trapping-math. Note that while the throughput of the sequence is higher than the throughput of the non-reciprocal instruction, the precision of the sequence can be decreased by up to 2 ulp (i.e. the inverse of 1.0 equals 0.99999994). Note that GCC implements "1.0f/sqrtf(x)" in terms of "RSQRTSS" (or "RSQRTPS") already with -ffast-math (or the above option combination), and doesn't need -mrecip. Also note that GCC emits the above sequence with additional Newton-Raphson step for vectorized single-float division and vectorized "sqrtf(x)" already with -ffast-math (or the above option combination), and doesn't need -mrecip. -mrecip=opt This option controls which reciprocal estimate instructions may be used. 
opt is a comma-separated list of options, which may be preceded by a ! to invert the option: all Enable all estimate instructions. default Enable the default instructions, equivalent to -mrecip. none Disable all estimate instructions, equivalent to -mno-recip. div Enable the approximation for scalar division. vec-div Enable the approximation for vectorized division. sqrt Enable the approximation for scalar square root. vec-sqrt Enable the approximation for vectorized square root. So, for example, -mrecip=all,!sqrt enables all of the reciprocal approximations, except for square root. -mveclibabi=type Specifies the ABI type to use for vectorizing intrinsics using an external library. Supported values for type are svml for the Intel short vector math library and acml for the AMD math core library. To use this option, both -ftree-vectorize and -funsafe-math-optimizations have to be enabled, and an SVML or ACML ABI-compatible library must be specified at link time. GCC currently emits calls to "vmldExp2", "vmldLn2", "vmldLog102", "vmldPow2", "vmldTanh2", "vmldTan2", "vmldAtan2", "vmldAtanh2", "vmldCbrt2", "vmldSinh2", "vmldSin2", "vmldAsinh2", "vmldAsin2", "vmldCosh2", "vmldCos2", "vmldAcosh2", "vmldAcos2", "vmlsExp4", "vmlsLn4", "vmlsLog104", "vmlsPow4", "vmlsTanh4", "vmlsTan4", "vmlsAtan4", "vmlsAtanh4", "vmlsCbrt4", "vmlsSinh4", "vmlsSin4", "vmlsAsinh4", "vmlsAsin4", "vmlsCosh4", "vmlsCos4", "vmlsAcosh4" and "vmlsAcos4" for corresponding function type when -mveclibabi=svml is used, and "__vrd2_sin", "__vrd2_cos", "__vrd2_exp", "__vrd2_log", "__vrd2_log2", "__vrd2_log10", "__vrs4_sinf", "__vrs4_cosf", "__vrs4_expf", "__vrs4_logf", "__vrs4_log2f", "__vrs4_log10f" and "__vrs4_powf" for the corresponding function type when -mveclibabi=acml is used. -mabi=name Generate code for the specified calling convention. Permissible values are sysv for the ABI used on GNU/Linux and other systems, and ms for the Microsoft ABI. The default is to use the Microsoft ABI when targeting Microsoft Windows and the SysV ABI on all other systems. You can control this behavior for specific functions by using the function attributes "ms_abi" and "sysv_abi". -mforce-indirect-call Force all calls to functions to be indirect. This is useful when using Intel Processor Trace where it generates more precise timing information for function calls. -mmanual-endbr Insert ENDBR instruction at function entry only via the "cf_check" function attribute. This is useful when used with the option -fcf-protection=branch to control ENDBR insertion at the function entry. -mcall-ms2sysv-xlogues Due to differences in 64-bit ABIs, any Microsoft ABI function that calls a System V ABI function must consider RSI, RDI and XMM6-15 as clobbered. By default, the code for saving and restoring these registers is emitted inline, resulting in fairly lengthy prologues and epilogues. Using -mcall-ms2sysv-xlogues emits prologues and epilogues that use stubs in the static portion of libgcc to perform these saves and restores, thus reducing function size at the cost of a few extra instructions. -mtls-dialect=type Generate code to access thread-local storage using the gnu or gnu2 conventions. gnu is the conservative default; gnu2 is more efficient, but it may add compile- and run-time requirements that cannot be satisfied on all systems. -mpush-args -mno-push-args Use PUSH operations to store outgoing parameters. This method is shorter and usually equally fast as method using SUB/MOV operations and is enabled by default. 
In some cases disabling it may improve performance because of improved scheduling and reduced dependencies. -maccumulate-outgoing-args If enabled, the maximum amount of space required for outgoing arguments is computed in the function prologue. This is faster on most modern CPUs because of reduced dependencies, improved scheduling and reduced stack usage when the preferred stack boundary is not equal to 2. The drawback is a notable increase in code size. This switch implies -mno-push-args. -mthreads Support thread-safe exception handling on MinGW. Programs that rely on thread-safe exception handling must compile and link all code with the -mthreads option. When compiling, -mthreads defines -D_MT; when linking, it links in a special thread helper library -lmingwthrd which cleans up per-thread exception-handling data. -mms-bitfields -mno-ms-bitfields Enable/disable bit-field layout compatible with the native Microsoft Windows compiler. If "packed" is used on a structure, or if bit-fields are used, it may be that the Microsoft ABI lays out the structure differently than the way GCC normally does. Particularly when moving packed data between functions compiled with GCC and the native Microsoft compiler (either via function call or as data in a file), it may be necessary to access either format. This option is enabled by default for Microsoft Windows targets. This behavior can also be controlled locally by use of variable or type attributes. For more information, see x86 Variable Attributes and x86 Type Attributes. The Microsoft structure layout algorithm is fairly simple with the exception of the bit-field packing. The padding and alignment of members of structures and whether a bit-field can straddle a storage-unit boundary are determine by these rules: 1. Structure members are stored sequentially in the order in which they are declared: the first member has the lowest memory address and the last member the highest. 2. Every data object has an alignment requirement. The alignment requirement for all data except structures, unions, and arrays is either the size of the object or the current packing size (specified with either the "aligned" attribute or the "pack" pragma), whichever is less. For structures, unions, and arrays, the alignment requirement is the largest alignment requirement of its members. Every object is allocated an offset so that: offset % alignment_requirement == 0 3. Adjacent bit-fields are packed into the same 1-, 2-, or 4-byte allocation unit if the integral types are the same size and if the next bit-field fits into the current allocation unit without crossing the boundary imposed by the common alignment requirements of the bit-fields. MSVC interprets zero-length bit-fields in the following ways: 1. If a zero-length bit-field is inserted between two bit- fields that are normally coalesced, the bit-fields are not coalesced. For example: struct { unsigned long bf_1 : 12; unsigned long : 0; unsigned long bf_2 : 12; } t1; The size of "t1" is 8 bytes with the zero-length bit- field. If the zero-length bit-field were removed, "t1"'s size would be 4 bytes. 2. If a zero-length bit-field is inserted after a bit-field, "foo", and the alignment of the zero-length bit-field is greater than the member that follows it, "bar", "bar" is aligned as the type of the zero-length bit-field. For example: struct { char foo : 4; short : 0; char bar; } t2; struct { char foo : 4; short : 0; double bar; } t3; For "t2", "bar" is placed at offset 2, rather than offset 1. 
Accordingly, the size of "t2" is 4. For "t3", the zero-length bit-field does not affect the alignment of "bar" or, as a result, the size of the structure. Taking this into account, it is important to note the following: 1. If a zero-length bit-field follows a normal bit-field, the type of the zero-length bit-field may affect the alignment of the structure as whole. For example, "t2" has a size of 4 bytes, since the zero-length bit-field follows a normal bit-field, and is of type short. 2. Even if a zero-length bit-field is not followed by a normal bit-field, it may still affect the alignment of the structure: struct { char foo : 6; long : 0; } t4; Here, "t4" takes up 4 bytes. 3. Zero-length bit-fields following non-bit-field members are ignored: struct { char foo; long : 0; char bar; } t5; Here, "t5" takes up 2 bytes. -mno-align-stringops Do not align the destination of inlined string operations. This switch reduces code size and improves performance in case the destination is already aligned, but GCC doesn't know about it. -minline-all-stringops By default GCC inlines string operations only when the destination is known to be aligned to least a 4-byte boundary. This enables more inlining and increases code size, but may improve performance of code that depends on fast "memcpy", "strlen", and "memset" for short lengths. -minline-stringops-dynamically For string operations of unknown size, use run-time checks with inline code for small blocks and a library call for large blocks. -mstringop-strategy=alg Override the internal decision heuristic for the particular algorithm to use for inlining string operations. The allowed values for alg are: rep_byte rep_4byte rep_8byte Expand using i386 "rep" prefix of the specified size. byte_loop loop unrolled_loop Expand into an inline loop. libcall Always use a library call. -mmemcpy-strategy=strategy Override the internal decision heuristic to decide if "__builtin_memcpy" should be inlined and what inline algorithm to use when the expected size of the copy operation is known. strategy is a comma-separated list of alg:max_size:dest_align triplets. alg is specified in -mstringop-strategy, max_size specifies the max byte size with which inline algorithm alg is allowed. For the last triplet, the max_size must be "-1". The max_size of the triplets in the list must be specified in increasing order. The minimal byte size for alg is 0 for the first triplet and "max_size + 1" of the preceding range. -mmemset-strategy=strategy The option is similar to -mmemcpy-strategy= except that it is to control "__builtin_memset" expansion. -momit-leaf-frame-pointer Don't keep the frame pointer in a register for leaf functions. This avoids the instructions to save, set up, and restore frame pointers and makes an extra register available in leaf functions. The option -fomit-leaf-frame-pointer removes the frame pointer for leaf functions, which might make debugging harder. -mtls-direct-seg-refs -mno-tls-direct-seg-refs Controls whether TLS variables may be accessed with offsets from the TLS segment register (%gs for 32-bit, %fs for 64-bit), or whether the thread base pointer must be added. Whether or not this is valid depends on the operating system, and whether it maps the segment to cover the entire TLS area. For systems that use the GNU C Library, the default is on. -msse2avx -mno-sse2avx Specify that the assembler should encode SSE instructions with VEX prefix. The option -mavx turns this on by default. 
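As a minimal, hypothetical sketch of the accesses governed by the -mtls-direct-seg-refs option described above, the following fragment declares a thread-local variable with the GNU "__thread" keyword; with the default setting its address can be formed directly from the TLS segment register, while -mno-tls-direct-seg-refs makes the compiler add the thread base pointer explicitly:

            /* Hypothetical example: a per-thread counter.  The addressing of
               tls_counter depends on the -mtls-direct-seg-refs setting.  */
            static __thread unsigned long tls_counter;

            unsigned long bump_counter (void)
            {
              return ++tls_counter;
            }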
-mfentry -mno-fentry If profiling is active (-pg), put the profiling counter call before the prologue. Note: On x86 architectures the attribute "ms_hook_prologue" isn't possible at the moment for -mfentry and -pg. -mrecord-mcount -mno-record-mcount If profiling is active (-pg), generate a __mcount_loc section that contains pointers to each profiling call. This is useful for automatically patching calls in and out. -mnop-mcount -mno-nop-mcount If profiling is active (-pg), generate the calls to the profiling functions as NOPs. This is useful when they should be patched in later dynamically. This is likely only useful together with -mrecord-mcount. -minstrument-return=type Instrument function exit in -pg -mfentry instrumented functions with a call to the specified function. This only instruments true returns ending with ret, but not sibling calls ending with jump. Valid types are none to not instrument, call to generate a call to __return__, or nop5 to generate a 5-byte nop. -mrecord-return -mno-record-return Generate a __return_loc section pointing to all return instrumentation code. -mfentry-name=name Set the name of the __fentry__ symbol called at function entry for -pg -mfentry functions. -mfentry-section=name Set the name of the section used to record -mrecord-mcount calls (default __mcount_loc). -mskip-rax-setup -mno-skip-rax-setup When generating code for the x86-64 architecture with SSE extensions disabled, -mskip-rax-setup can be used to skip setting up the RAX register when there are no variable arguments passed in vector registers. Warning: Since the RAX register is used to avoid unnecessarily saving vector registers on the stack when passing variable arguments, the impact of this option is that callees may waste some stack space, misbehave or jump to a random location. GCC 4.4 or newer does not have those issues, regardless of the RAX register value. -m8bit-idiv -mno-8bit-idiv On some processors, like Intel Atom, 8-bit unsigned integer divide is much faster than 32-bit/64-bit integer divide. This option generates a run-time check. If both dividend and divisor are within the range of 0 to 255, 8-bit unsigned integer divide is used instead of 32-bit/64-bit integer divide. -mavx256-split-unaligned-load -mavx256-split-unaligned-store Split 32-byte AVX unaligned load and store. -mstack-protector-guard=guard -mstack-protector-guard-reg=reg -mstack-protector-guard-offset=offset Generate stack protection code using canary at guard. Supported locations are global for global canary or tls for per-thread canary in the TLS block (the default). This option has effect only when -fstack-protector or -fstack-protector-all is specified. With the latter choice the options -mstack-protector-guard-reg=reg and -mstack-protector-guard-offset=offset furthermore specify which segment register (%fs or %gs) to use as base register for reading the canary, and from what offset from that base register. The default for those is as specified in the relevant ABI. -mgeneral-regs-only Generate code that uses only the general-purpose registers. This prevents the compiler from using floating-point, vector, mask and bound registers. -mindirect-branch=choice Convert indirect call and jump with choice. The default is keep, which keeps indirect call and jump unmodified. thunk converts indirect call and jump to call and return thunk. thunk-inline converts indirect call and jump to inlined call and return thunk. thunk-extern converts indirect call and jump to external call and return thunk provided in a separate object file.
You can control this behavior for a specific function by using the function attribute "indirect_branch". Note that -mcmodel=large is incompatible with -mindirect-branch=thunk and -mindirect-branch=thunk-extern since the thunk function may not be reachable in the large code model. Note that -mindirect-branch=thunk-extern is compatible with -fcf-protection=branch since the external thunk can be made to enable control-flow check. -mfunction-return=choice Convert function return with choice. The default is keep, which keeps function return unmodified. thunk converts function return to call and return thunk. thunk-inline converts function return to inlined call and return thunk. thunk-extern converts function return to external call and return thunk provided in a separate object file. You can control this behavior for a specific function by using the function attribute "function_return". Note that -mfunction-return=thunk-extern is compatible with -fcf-protection=branch since the external thunk can be made to enable control-flow check. Note that -mcmodel=large is incompatible with -mfunction-return=thunk and -mfunction-return=thunk-extern since the thunk function may not be reachable in the large code model. -mindirect-branch-register Force indirect call and jump via register. These -m switches are supported in addition to the above on x86-64 processors in 64-bit environments. -m32 -m64 -mx32 -m16 -miamcu Generate code for a 16-bit, 32-bit or 64-bit environment. The -m32 option sets "int", "long", and pointer types to 32 bits, and generates code that runs on any i386 system. The -m64 option sets "int" to 32 bits and "long" and pointer types to 64 bits, and generates code for the x86-64 architecture. For Darwin only the -m64 option also turns off the -fno-pic and -mdynamic-no-pic options. The -mx32 option sets "int", "long", and pointer types to 32 bits, and generates code for the x86-64 architecture. The -m16 option is the same as -m32, except that it outputs the ".code16gcc" assembly directive at the beginning of the assembly output so that the binary can run in 16-bit mode. The -miamcu option generates code which conforms to the Intel MCU psABI. It requires the -m32 option to be turned on. -mno-red-zone Do not use a so-called "red zone" for x86-64 code. The red zone is mandated by the x86-64 ABI; it is a 128-byte area beyond the location of the stack pointer that is not modified by signal or interrupt handlers and therefore can be used for temporary data without adjusting the stack pointer. The flag -mno-red-zone disables this red zone. -mcmodel=small Generate code for the small code model: the program and its symbols must be linked in the lower 2 GB of the address space. Pointers are 64 bits. Programs can be statically or dynamically linked. This is the default code model. -mcmodel=kernel Generate code for the kernel code model. The kernel runs in the negative 2 GB of the address space. This model has to be used for Linux kernel code. -mcmodel=medium Generate code for the medium model: the program is linked in the lower 2 GB of the address space. Small symbols are also placed there. Symbols with sizes larger than -mlarge-data-threshold are put into large data or BSS sections and can be located above 2GB. Programs can be statically or dynamically linked. -mcmodel=large Generate code for the large model. This model makes no assumptions about addresses and sizes of sections. -maddress-mode=long Generate code for long address mode. This is only supported for 64-bit and x32 environments.
It is the default address mode for 64-bit environments. -maddress-mode=short Generate code for short address mode. This is only supported for 32-bit and x32 environments. It is the default address mode for 32-bit and x32 environments. x86 Windows Options These additional options are available for Microsoft Windows targets: -mconsole This option specifies that a console application is to be generated, by instructing the linker to set the PE header subsystem type required for console applications. This option is available for Cygwin and MinGW targets and is enabled by default on those targets. -mdll This option is available for Cygwin and MinGW targets. It specifies that a DLL---a dynamic link library---is to be generated, enabling the selection of the required runtime startup object and entry point. -mnop-fun-dllimport This option is available for Cygwin and MinGW targets. It specifies that the "dllimport" attribute should be ignored. -mthread This option is available for MinGW targets. It specifies that MinGW-specific thread support is to be used. -municode This option is available for MinGW-w64 targets. It causes the "UNICODE" preprocessor macro to be predefined, and chooses Unicode-capable runtime startup code. -mwin32 This option is available for Cygwin and MinGW targets. It specifies that the typical Microsoft Windows predefined macros are to be set in the pre-processor, but does not influence the choice of runtime library/startup code. -mwindows This option is available for Cygwin and MinGW targets. It specifies that a GUI application is to be generated by instructing the linker to set the PE header subsystem type appropriately. -fno-set-stack-executable This option is available for MinGW targets. It specifies that the executable flag for the stack used by nested functions isn't set. This is necessary for binaries running in kernel mode of Microsoft Windows, as there the User32 API, which is used to set executable privileges, isn't available. -fwritable-relocated-rdata This option is available for MinGW and Cygwin targets. It specifies that relocated-data in read-only section is put into the ".data" section. This is a necessary for older runtimes not supporting modification of ".rdata" sections for pseudo-relocation. -mpe-aligned-commons This option is available for Cygwin and MinGW targets. It specifies that the GNU extension to the PE file format that permits the correct alignment of COMMON variables should be used when generating code. It is enabled by default if GCC detects that the target assembler found during configuration supports the feature. See also under x86 Options for standard options. Xstormy16 Options These options are defined for Xstormy16: -msim Choose startup files and linker script suitable for the simulator. Xtensa Options These options are supported for Xtensa targets: -mconst16 -mno-const16 Enable or disable use of "CONST16" instructions for loading constant values. The "CONST16" instruction is currently not a standard option from Tensilica. When enabled, "CONST16" instructions are always used in place of the standard "L32R" instructions. The use of "CONST16" is enabled by default only if the "L32R" instruction is not available. -mfused-madd -mno-fused-madd Enable or disable use of fused multiply/add and multiply/subtract instructions in the floating-point option. This has no effect if the floating-point option is not also enabled. 
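For example (a hypothetical fragment, not from the GCC manual), an expression of the form shown below is a candidate for contraction into a single fused multiply/add when the Xtensa floating-point option provides that instruction and -mfused-madd is in effect:

            /* Hypothetical example: with -mfused-madd the multiply and add may be
               combined into one instruction whose intermediate product is not
               rounded; with -mno-fused-madd they remain separate operations.  */
            float mac (float a, float b, float c)
            {
              return a * b + c;
            }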
Disabling fused multiply/add and multiply/subtract instructions forces the compiler to use separate instructions for the multiply and add/subtract operations. This may be desirable in some cases where strict IEEE 754-compliant results are required: the fused multiply add/subtract instructions do not round the intermediate result, thereby producing results with more bits of precision than specified by the IEEE standard. Disabling fused multiply add/subtract instructions also ensures that the program output is not sensitive to the compiler's ability to combine multiply and add/subtract operations. -mserialize-volatile -mno-serialize-volatile When this option is enabled, GCC inserts "MEMW" instructions before "volatile" memory references to guarantee sequential consistency. The default is -mserialize-volatile. Use -mno-serialize-volatile to omit the "MEMW" instructions. -mforce-no-pic For targets, like GNU/Linux, where all user-mode Xtensa code must be position-independent code (PIC), this option disables PIC for compiling kernel code. -mtext-section-literals -mno-text-section-literals These options control the treatment of literal pools. The default is -mno-text-section-literals, which places literals in a separate section in the output file. This allows the literal pool to be placed in a data RAM/ROM, and it also allows the linker to combine literal pools from separate object files to remove redundant literals and improve code size. With -mtext-section-literals, the literals are interspersed in the text section in order to keep them as close as possible to their references. This may be necessary for large assembly files. Literals for each function are placed right before that function. -mauto-litpools -mno-auto-litpools These options control the treatment of literal pools. The default is -mno-auto-litpools, which places literals in a separate section in the output file unless -mtext-section-literals is used. With -mauto-litpools the literals are interspersed in the text section by the assembler. Compiler does not produce explicit ".literal" directives and loads literals into registers with "MOVI" instructions instead of "L32R" to let the assembler do relaxation and place literals as necessary. This option allows assembler to create several literal pools per function and assemble very big functions, which may not be possible with -mtext-section-literals. -mtarget-align -mno-target-align When this option is enabled, GCC instructs the assembler to automatically align instructions to reduce branch penalties at the expense of some code density. The assembler attempts to widen density instructions to align branch targets and the instructions following call instructions. If there are not enough preceding safe density instructions to align a target, no widening is performed. The default is -mtarget-align. These options do not affect the treatment of auto-aligned instructions like "LOOP", which the assembler always aligns, either by widening density instructions or by inserting NOP instructions. -mlongcalls -mno-longcalls When this option is enabled, GCC instructs the assembler to translate direct calls to indirect calls unless it can determine that the target of a direct call is in the range allowed by the call instruction. This translation typically occurs for calls to functions in other source files. Specifically, the assembler translates a direct "CALL" instruction into an "L32R" followed by a "CALLX" instruction. The default is -mno-longcalls. 
This option should be used in programs where the call target can potentially be out of range. This option is implemented in the assembler, not the compiler, so the assembly code generated by GCC still shows direct call instructions---look at the disassembled object code to see the actual instructions. Note that the assembler uses an indirect call for every cross-file call, not just those that really are out of range. zSeries Options These are listed under ENVIRONMENT top This section describes several environment variables that affect how GCC operates. Some of them work by specifying directories or prefixes to use when searching for various kinds of files. Some are used to specify other aspects of the compilation environment. Note that you can also specify places to search using options such as -B, -I and -L. These take precedence over places specified using environment variables, which in turn take precedence over those specified by the configuration of GCC. LANG LC_CTYPE LC_MESSAGES LC_ALL These environment variables control the way that GCC uses localization information which allows GCC to work with different national conventions. GCC inspects the locale categories LC_CTYPE and LC_MESSAGES if it has been configured to do so. These locale categories can be set to any value supported by your installation. A typical value is en_GB.UTF-8 for English in the United Kingdom encoded in UTF-8. The LC_CTYPE environment variable specifies character classification. GCC uses it to determine the character boundaries in a string; this is needed for some multibyte encodings that contain quote and escape characters that are otherwise interpreted as a string end or escape. The LC_MESSAGES environment variable specifies the language to use in diagnostic messages. If the LC_ALL environment variable is set, it overrides the value of LC_CTYPE and LC_MESSAGES; otherwise, LC_CTYPE and LC_MESSAGES default to the value of the LANG environment variable. If none of these variables are set, GCC defaults to traditional C English behavior. TMPDIR If TMPDIR is set, it specifies the directory to use for temporary files. GCC uses temporary files to hold the output of one stage of compilation which is to be used as input to the next stage: for example, the output of the preprocessor, which is the input to the compiler proper. GCC_COMPARE_DEBUG Setting GCC_COMPARE_DEBUG is nearly equivalent to passing -fcompare-debug to the compiler driver. See the documentation of this option for more details. GCC_EXEC_PREFIX If GCC_EXEC_PREFIX is set, it specifies a prefix to use in the names of the subprograms executed by the compiler. No slash is added when this prefix is combined with the name of a subprogram, but you can specify a prefix that ends with a slash if you wish. If GCC_EXEC_PREFIX is not set, GCC attempts to figure out an appropriate prefix to use based on the pathname it is invoked with. If GCC cannot find the subprogram using the specified prefix, it tries looking in the usual places for the subprogram. The default value of GCC_EXEC_PREFIX is prefix/lib/gcc/ where prefix is the prefix to the installed compiler. In many cases prefix is the value of "prefix" when you ran the configure script. Other prefixes specified with -B take precedence over this prefix. This prefix is also used for finding files such as crt0.o that are used for linking. In addition, the prefix is used in an unusual way in finding the directories to search for header files. 
For each of the standard directories whose name normally begins with /usr/local/lib/gcc (more precisely, with the value of GCC_INCLUDE_DIR), GCC tries replacing that beginning with the specified prefix to produce an alternate directory name. Thus, with -Bfoo/, GCC searches foo/bar just before it searches the standard directory /usr/local/lib/bar. If a standard directory begins with the configured prefix then the value of prefix is replaced by GCC_EXEC_PREFIX when looking for header files. COMPILER_PATH The value of COMPILER_PATH is a colon-separated list of directories, much like PATH. GCC tries the directories thus specified when searching for subprograms, if it cannot find the subprograms using GCC_EXEC_PREFIX. LIBRARY_PATH The value of LIBRARY_PATH is a colon-separated list of directories, much like PATH. When configured as a native compiler, GCC tries the directories thus specified when searching for special linker files, if it cannot find them using GCC_EXEC_PREFIX. Linking using GCC also uses these directories when searching for ordinary libraries for the -l option (but directories specified with -L come first). LANG This variable is used to pass locale information to the compiler. One way in which this information is used is to determine the character set to be used when character literals, string literals and comments are parsed in C and C++. When the compiler is configured to allow multibyte characters, the following values for LANG are recognized: C-JIS Recognize JIS characters. C-SJIS Recognize SJIS characters. C-EUCJP Recognize EUCJP characters. If LANG is not defined, or if it has some other value, then the compiler uses "mblen" and "mbtowc" as defined by the default locale to recognize and translate multibyte characters. Some additional environment variables affect the behavior of the preprocessor. CPATH C_INCLUDE_PATH CPLUS_INCLUDE_PATH OBJC_INCLUDE_PATH Each variable's value is a list of directories separated by a special character, much like PATH, in which to look for header files. The special character, "PATH_SEPARATOR", is target-dependent and determined at GCC build time. For Microsoft Windows-based targets it is a semicolon, and for almost all other targets it is a colon. CPATH specifies a list of directories to be searched as if specified with -I, but after any paths given with -I options on the command line. This environment variable is used regardless of which language is being preprocessed. The remaining environment variables apply only when preprocessing the particular language indicated. Each specifies a list of directories to be searched as if specified with -isystem, but after any paths given with -isystem options on the command line. In all these variables, an empty element instructs the compiler to search its current working directory. Empty elements can appear at the beginning or end of a path. For instance, if the value of CPATH is ":/special/include", that has the same effect as -I. -I/special/include. DEPENDENCIES_OUTPUT If this variable is set, its value specifies how to output dependencies for Make based on the non-system header files processed by the compiler. System header files are ignored in the dependency output. The value of DEPENDENCIES_OUTPUT can be just a file name, in which case the Make rules are written to that file, guessing the target name from the source file name. Or the value can have the form file target, in which case the rules are written to file file using target as the target name. 
In other words, this environment variable is equivalent to combining the options -MM and -MF, with an optional -MT switch too. SUNPRO_DEPENDENCIES This variable is the same as DEPENDENCIES_OUTPUT (see above), except that system header files are not ignored, so it implies -M rather than -MM. However, the dependence on the main input file is omitted. SOURCE_DATE_EPOCH If this variable is set, its value specifies a UNIX timestamp to be used in replacement of the current date and time in the "__DATE__" and "__TIME__" macros, so that the embedded timestamps become reproducible. The value of SOURCE_DATE_EPOCH must be a UNIX timestamp, defined as the number of seconds (excluding leap seconds) since 01 Jan 1970 00:00:00 represented in ASCII; identical to the output of @command{date +%s} on GNU/Linux and other systems that support the %s extension in the "date" command. The value should be a known timestamp such as the last modification time of the source or package and it should be set by the build process. BUGS top For instructions on reporting bugs, see <https://gcc.gnu.org/bugs/ >. FOOTNOTES top 1. On some systems, gcc -shared needs to build supplementary stub code for constructors to work. On multi-libbed systems, gcc -shared must select the correct support libraries to link against. Failing to supply the correct flags may lead to subtle defects. Supplying them in cases where they are not necessary is innocuous. SEE ALSO top gpl(7), gfdl(7), fsf-funding(7), cpp(1), gcov(1), as(1), ld(1), gdb(1), dbx(1) and the Info entries for gcc, cpp, as, ld, binutils and gdb. AUTHOR top See the Info entry for gcc, or <http://gcc.gnu.org/onlinedocs/gcc/Contributors.html >, for contributors to GCC. COPYRIGHT top Copyright (c) 1988-2019 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with the Invariant Sections being "GNU General Public License" and "Funding Free Software", the Front-Cover texts being (a) (see below), and with the Back-Cover Texts being (b) (see below). A copy of the license is included in the gfdl(7) man page. (a) The FSF's Front-Cover Text is: A GNU Manual (b) The FSF's Back-Cover Text is: You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development. COLOPHON top This page is part of the gcc (GNU Compiler Collection) project. Information about the project can be found at http://gcc.gnu.org/. If you have a bug report for this manual page, see http://gcc.gnu.org/bugs/. This page was obtained from the tarball gcc-9.5.0.tar.xz fetched from ftp://ftp.gwdg.de/pub/misc/gcc/releases/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org gcc-9.5.0 2022-05-27 GCC(1) Pages that refer to this page: as(1), dpkg-architecture(1), uselib(2), backtrace(3), dladdr(3), dlopen(3), lttng-ust-cyg-profile(3), offsetof(3), printf(3), sincos(3), strftime(3), feature_test_macros(7), hier(7), math_error(7), warning::debuginfo(7stap) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. 
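A brief, hedged sketch of the x86 indirect-branch, function-return and code-model options described earlier in this manual; prog.c is a placeholder source file. The second command shows a combination in the spirit of retpoline-style kernel builds; as noted above, neither thunk variant can be combined with -mcmodel=large.

    $ gcc -O2 -c prog.c -mindirect-branch=thunk -mfunction-return=thunk -mindirect-branch-register    # inline thunks; indirect calls forced through a register
    $ gcc -O2 -c prog.c -mcmodel=kernel -mindirect-branch=thunk-extern -mfunction-return=thunk-extern  # external thunks supplied in a separate object file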
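The ENVIRONMENT variables described above can also be set for a single invocation; a minimal sketch, where main.c, deps.mk and /opt/widget/include are placeholders:

    $ TMPDIR=/var/tmp gcc -c main.c                        # temporary files are written under /var/tmp
    $ CPATH=/opt/widget/include gcc -c main.c              # searched as if given with -I, after command-line -I paths
    $ DEPENDENCIES_OUTPUT="deps.mk main.o" gcc -c main.c   # roughly -MM -MF deps.mk -MT main.o
    $ SOURCE_DATE_EPOCH=1577836800 gcc -c main.c           # pins __DATE__/__TIME__ for reproducible builds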
# gcc\n\n> Preprocess and compile C and C++ source files, then assemble and link them together.\n> More information: <https://gcc.gnu.org>.\n\n- Compile multiple source files into an executable:\n\n`gcc {{path/to/source1.c path/to/source2.c ...}} -o {{path/to/output_executable}}`\n\n- Show common warnings, debug symbols in output, and optimize without affecting debugging:\n\n`gcc {{path/to/source.c}} -Wall -g -Og -o {{path/to/output_executable}}`\n\n- Include libraries from a different path:\n\n`gcc {{path/to/source.c}} -o {{path/to/output_executable}} -I{{path/to/header}} -L{{path/to/library}} -l{{library_name}}`\n\n- Compile source code into Assembler instructions:\n\n`gcc -S {{path/to/source.c}}`\n\n- Compile source code into an object file without linking:\n\n`gcc -c {{path/to/source.c}}`\n\n- Optimize the compiled program for performance:\n\n`gcc {{path/to/source.c}} -O{{1|2|3|fast}} -o {{path/to/output_executable}}`\n
gcov
gcov(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training gcov(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON GCOV(1) GNU GCOV(1) NAME top gcov - coverage testing tool SYNOPSIS top gcov [-v|--version] [-h|--help] [-a|--all-blocks] [-b|--branch-probabilities] [-c|--branch-counts] [-d|--display-progress] [-f|--function-summaries] [-i|--json-format] [-j|--human-readable] [-k|--use-colors] [-l|--long-file-names] [-m|--demangled-names] [-n|--no-output] [-o|--object-directory directory|file] [-p|--preserve-paths] [-q|--use-hotness-colors] [-r|--relative-only] [-s|--source-prefix directory] [-t|--stdout] [-u|--unconditional-branches] [-x|--hash-filenames] files DESCRIPTION top gcov is a test coverage program. Use it in concert with GCC to analyze your programs to help create more efficient, faster running code and to discover untested parts of your program. You can use gcov as a profiling tool to help discover where your optimization efforts will best affect your code. You can also use gcov along with the other profiling tool, gprof, to assess which parts of your code use the greatest amount of computing time. Profiling tools help you analyze your code's performance. Using a profiler such as gcov or gprof, you can find out some basic performance statistics, such as: * how often each line of code executes * what lines of code are actually executed * how much computing time each section of code uses Once you know these things about how your code works when compiled, you can look at each module to see which modules should be optimized. gcov helps you determine where to work on optimization. Software developers also use coverage testing in concert with testsuites, to make sure software is actually good enough for a release. Testsuites can verify that a program works as expected; a coverage program tests to see how much of the program is exercised by the testsuite. Developers can then determine what kinds of test cases need to be added to the testsuites to create both better testing and a better final product. You should compile your code without optimization if you plan to use gcov because the optimization, by combining some lines of code into one function, may not give you as much information as you need to look for `hot spots' where the code is using a great deal of computer time. Likewise, because gcov accumulates statistics by line (at the lowest resolution), it works best with a programming style that places only one statement on each line. If you use complicated macros that expand to loops or to other control structures, the statistics are less helpful---they only report on the line where the macro call appears. If your complex macros behave like functions, you can replace them with inline functions to solve this problem. gcov creates a logfile called sourcefile.gcov which indicates how many times each line of a source file sourcefile.c has executed. You can use these logfiles along with gprof to aid in fine-tuning the performance of your programs. gprof gives timing information you can use along with the information you get from gcov. gcov works only on code compiled with GCC. It is not compatible with any other profiling or test coverage mechanism. OPTIONS top -a --all-blocks Write individual execution counts for every basic block. Normally gcov outputs execution counts only for the main blocks of a line. With this option you can determine if blocks within a single line are not being executed. 
-b --branch-probabilities Write branch frequencies to the output file, and write branch summary info to the standard output. This option allows you to see how often each branch in your program was taken. Unconditional branches will not be shown, unless the -u option is given. -c --branch-counts Write branch frequencies as the number of branches taken, rather than the percentage of branches taken. -d --display-progress Display the progress on the standard output. -f --function-summaries Output summaries for each function in addition to the file level summary. -h --help Display help about using gcov (on the standard output), and exit without doing any further processing. -i --json-format Output gcov file in an easy-to-parse JSON intermediate format which does not require source code for generation. The JSON file is compressed with gzip compression algorithm and the files have .gcov.json.gz extension. Structure of the JSON is following: { "current_working_directory": <current_working_directory>, "data_file": <data_file>, "format_version": <format_version>, "gcc_version": <gcc_version> "files": [<file>] } Fields of the root element have following semantics: * current_working_directory: working directory where a compilation unit was compiled * data_file: name of the data file (GCDA) * format_version: semantic version of the format * gcc_version: version of the GCC compiler Each file has the following form: { "file": <file_name>, "functions": [<function>], "lines": [<line>] } Fields of the file element have following semantics: * file_name: name of the source file Each function has the following form: { "blocks": <blocks>, "blocks_executed": <blocks_executed>, "demangled_name": "<demangled_name>, "end_column": <end_column>, "end_line": <end_line>, "execution_count": <execution_count>, "name": <name>, "start_column": <start_column> "start_line": <start_line> } Fields of the function element have following semantics: * blocks: number of blocks that are in the function * blocks_executed: number of executed blocks of the function * demangled_name: demangled name of the function * end_column: column in the source file where the function ends * end_line: line in the source file where the function ends * execution_count: number of executions of the function * name: name of the function * start_column: column in the source file where the function begins * start_line: line in the source file where the function begins Note that line numbers and column numbers number from 1. In the current implementation, start_line and start_column do not include any template parameters and the leading return type but that this is likely to be fixed in the future. Each line has the following form: { "branches": [<branch>], "count": <count>, "line_number": <line_number>, "unexecuted_block": <unexecuted_block> "function_name": <function_name>, } Branches are present only with -b option. 
Fields of the line element have following semantics: * count: number of executions of the line * line_number: line number * unexecuted_block: flag whether the line contains an unexecuted block (not all statements on the line are executed) * function_name: a name of a function this line belongs to (for a line with an inlined statements can be not set) Each branch has the following form: { "count": <count>, "fallthrough": <fallthrough>, "throw": <throw> } Fields of the branch element have following semantics: * count: number of executions of the branch * fallthrough: true when the branch is a fall through branch * throw: true when the branch is an exceptional branch -j --human-readable Write counts in human readable format (like 24.6k). -k --use-colors Use colors for lines of code that have zero coverage. We use red color for non-exceptional lines and cyan for exceptional. Same colors are used for basic blocks with -a option. -l --long-file-names Create long file names for included source files. For example, if the header file x.h contains code, and was included in the file a.c, then running gcov on the file a.c will produce an output file called a.c##x.h.gcov instead of x.h.gcov. This can be useful if x.h is included in multiple source files and you want to see the individual contributions. If you use the -p option, both the including and included file names will be complete path names. -m --demangled-names Display demangled function names in output. The default is to show mangled function names. -n --no-output Do not create the gcov output file. -o directory|file --object-directory directory --object-file file Specify either the directory containing the gcov data files, or the object path name. The .gcno, and .gcda data files are searched for using this option. If a directory is specified, the data files are in that directory and named after the input file name, without its extension. If a file is specified here, the data files are named after that file, without its extension. -p --preserve-paths Preserve complete path information in the names of generated .gcov files. Without this option, just the filename component is used. With this option, all directories are used, with / characters translated to # characters, . directory components removed and unremoveable .. components renamed to ^. This is useful if sourcefiles are in several different directories. -q --use-hotness-colors Emit perf-like colored output for hot lines. Legend of the color scale is printed at the very beginning of the output file. -r --relative-only Only output information about source files with a relative pathname (after source prefix elision). Absolute paths are usually system header files and coverage of any inline functions therein is normally uninteresting. -s directory --source-prefix directory A prefix for source file names to remove when generating the output coverage files. This option is useful when building in a separate directory, and the pathname to the source directory is not wanted when determining the output file names. Note that this prefix detection is applied before determining whether the source file is absolute. -t --stdout Output to standard output instead of output files. -u --unconditional-branches When branch probabilities are given, include those of unconditional branches. Unconditional branches are normally not interesting. -v --version Display the gcov version number (on the standard output), and exit without doing any further processing. 
-w --verbose Print verbose informations related to basic blocks and arcs. -x --hash-filenames When using --preserve-paths, gcov uses the full pathname of the source files to create an output filename. This can lead to long filenames that can overflow filesystem limits. This option creates names of the form source-file##md5.gcov, where the source-file component is the final filename part and the md5 component is calculated from the full mangled name that would have been used otherwise. The option is an alternative to the --preserve-paths on systems which have a filesystem limit. gcov should be run with the current directory the same as that when you invoked the compiler. Otherwise it will not be able to locate the source files. gcov produces files called mangledname.gcov in the current directory. These contain the coverage information of the source file they correspond to. One .gcov file is produced for each source (or header) file containing code, which was compiled to produce the data files. The mangledname part of the output file name is usually simply the source file name, but can be something more complicated if the -l or -p options are given. Refer to those options for details. If you invoke gcov with multiple input files, the contributions from each input file are summed. Typically you would invoke it with the same list of files as the final link of your executable. The .gcov files contain the : separated fields along with program source code. The format is <execution_count>:<line_number>:<source line text> Additional block information may succeed each line, when requested by command line option. The execution_count is - for lines containing no code. Unexecuted lines are marked ##### or =====, depending on whether they are reachable by non-exceptional paths or only exceptional paths such as C++ exception handlers, respectively. Given the -a option, unexecuted blocks are marked $$$$$ or %%%%%, depending on whether a basic block is reachable via non-exceptional or exceptional paths. Executed basic blocks having a statement with zero execution_count end with * character and are colored with magenta color with the -k option. This functionality is not supported in Ada. Note that GCC can completely remove the bodies of functions that are not needed -- for instance if they are inlined everywhere. Such functions are marked with -, which can be confusing. Use the -fkeep-inline-functions and -fkeep-static-functions options to retain these functions and allow gcov to properly show their execution_count. Some lines of information at the start have line_number of zero. These preamble lines are of the form -:0:<tag>:<value> The ordering and number of these preamble lines will be augmented as gcov development progresses --- do not rely on them remaining unchanged. Use tag to locate a particular preamble line. The additional block information is of the form <tag> <information> The information is human readable, but designed to be simple enough for machine parsing too. When printing percentages, 0% and 100% are only printed when the values are exactly 0% and 100% respectively. Other values which would conventionally be rounded to 0% or 100% are instead printed as the nearest non-boundary value. When using gcov, you must first compile your program with a special GCC option --coverage. 
This tells the compiler to generate additional information needed by gcov (basically a flow graph of the program) and also includes additional code in the object files for generating the extra profiling information needed by gcov. These additional files are placed in the directory where the object file is located. Running the program will cause profile output to be generated. For each source file compiled with -fprofile-arcs, an accompanying .gcda file will be placed in the object file directory. Running gcov with your program's source file names as arguments will now produce a listing of the code along with frequency of execution for each line. For example, if your program is called tmp.cpp, this is what you see when you use the basic gcov facility: $ g++ --coverage tmp.cpp $ a.out $ gcov tmp.cpp -m File 'tmp.cpp' Lines executed:92.86% of 14 Creating 'tmp.cpp.gcov' The file tmp.cpp.gcov contains output from gcov. Here is a sample: -: 0:Source:tmp.cpp -: 0:Working directory:/home/gcc/testcase -: 0:Graph:tmp.gcno -: 0:Data:tmp.gcda -: 0:Runs:1 -: 0:Programs:1 -: 1:#include <stdio.h> -: 2: -: 3:template<class T> -: 4:class Foo -: 5:{ -: 6: public: 1*: 7: Foo(): b (1000) {} ------------------ Foo<char>::Foo(): #####: 7: Foo(): b (1000) {} ------------------ Foo<int>::Foo(): 1: 7: Foo(): b (1000) {} ------------------ 2*: 8: void inc () { b++; } ------------------ Foo<char>::inc(): #####: 8: void inc () { b++; } ------------------ Foo<int>::inc(): 2: 8: void inc () { b++; } ------------------ -: 9: -: 10: private: -: 11: int b; -: 12:}; -: 13: -: 14:template class Foo<int>; -: 15:template class Foo<char>; -: 16: -: 17:int 1: 18:main (void) -: 19:{ -: 20: int i, total; 1: 21: Foo<int> counter; -: 22: 1: 23: counter.inc(); 1: 24: counter.inc(); 1: 25: total = 0; -: 26: 11: 27: for (i = 0; i < 10; i++) 10: 28: total += i; -: 29: 1*: 30: int v = total > 100 ? 1 : 2; -: 31: 1: 32: if (total != 45) #####: 33: printf ("Failure\n"); -: 34: else 1: 35: printf ("Success\n"); 1: 36: return 0; -: 37:} Note that line 7 is shown in the report multiple times. First occurrence presents total number of execution of the line and the next two belong to instances of class Foo constructors. As you can also see, line 30 contains some unexecuted basic blocks and thus execution count has asterisk symbol. When you use the -a option, you will get individual block counts, and the output looks like this: -: 0:Source:tmp.cpp -: 0:Working directory:/home/gcc/testcase -: 0:Graph:tmp.gcno -: 0:Data:tmp.gcda -: 0:Runs:1 -: 0:Programs:1 -: 1:#include <stdio.h> -: 2: -: 3:template<class T> -: 4:class Foo -: 5:{ -: 6: public: 1*: 7: Foo(): b (1000) {} ------------------ Foo<char>::Foo(): #####: 7: Foo(): b (1000) {} ------------------ Foo<int>::Foo(): 1: 7: Foo(): b (1000) {} ------------------ 2*: 8: void inc () { b++; } ------------------ Foo<char>::inc(): #####: 8: void inc () { b++; } ------------------ Foo<int>::inc(): 2: 8: void inc () { b++; } ------------------ -: 9: -: 10: private: -: 11: int b; -: 12:}; -: 13: -: 14:template class Foo<int>; -: 15:template class Foo<char>; -: 16: -: 17:int 1: 18:main (void) -: 19:{ -: 20: int i, total; 1: 21: Foo<int> counter; 1: 21-block 0 -: 22: 1: 23: counter.inc(); 1: 23-block 0 1: 24: counter.inc(); 1: 24-block 0 1: 25: total = 0; -: 26: 11: 27: for (i = 0; i < 10; i++) 1: 27-block 0 11: 27-block 1 10: 28: total += i; 10: 28-block 0 -: 29: 1*: 30: int v = total > 100 ? 
1 : 2; 1: 30-block 0 %%%%%: 30-block 1 1: 30-block 2 -: 31: 1: 32: if (total != 45) 1: 32-block 0 #####: 33: printf ("Failure\n"); %%%%%: 33-block 0 -: 34: else 1: 35: printf ("Success\n"); 1: 35-block 0 1: 36: return 0; 1: 36-block 0 -: 37:} In this mode, each basic block is only shown on one line -- the last line of the block. A multi-line block will only contribute to the execution count of that last line, and other lines will not be shown to contain code, unless previous blocks end on those lines. The total execution count of a line is shown and subsequent lines show the execution counts for individual blocks that end on that line. After each block, the branch and call counts of the block will be shown, if the -b option is given. Because of the way GCC instruments calls, a call count can be shown after a line with no individual blocks. As you can see, line 33 contains a basic block that was not executed. When you use the -b option, your output looks like this: -: 0:Source:tmp.cpp -: 0:Working directory:/home/gcc/testcase -: 0:Graph:tmp.gcno -: 0:Data:tmp.gcda -: 0:Runs:1 -: 0:Programs:1 -: 1:#include <stdio.h> -: 2: -: 3:template<class T> -: 4:class Foo -: 5:{ -: 6: public: 1*: 7: Foo(): b (1000) {} ------------------ Foo<char>::Foo(): function Foo<char>::Foo() called 0 returned 0% blocks executed 0% #####: 7: Foo(): b (1000) {} ------------------ Foo<int>::Foo(): function Foo<int>::Foo() called 1 returned 100% blocks executed 100% 1: 7: Foo(): b (1000) {} ------------------ 2*: 8: void inc () { b++; } ------------------ Foo<char>::inc(): function Foo<char>::inc() called 0 returned 0% blocks executed 0% #####: 8: void inc () { b++; } ------------------ Foo<int>::inc(): function Foo<int>::inc() called 2 returned 100% blocks executed 100% 2: 8: void inc () { b++; } ------------------ -: 9: -: 10: private: -: 11: int b; -: 12:}; -: 13: -: 14:template class Foo<int>; -: 15:template class Foo<char>; -: 16: -: 17:int function main called 1 returned 100% blocks executed 81% 1: 18:main (void) -: 19:{ -: 20: int i, total; 1: 21: Foo<int> counter; call 0 returned 100% branch 1 taken 100% (fallthrough) branch 2 taken 0% (throw) -: 22: 1: 23: counter.inc(); call 0 returned 100% branch 1 taken 100% (fallthrough) branch 2 taken 0% (throw) 1: 24: counter.inc(); call 0 returned 100% branch 1 taken 100% (fallthrough) branch 2 taken 0% (throw) 1: 25: total = 0; -: 26: 11: 27: for (i = 0; i < 10; i++) branch 0 taken 91% (fallthrough) branch 1 taken 9% 10: 28: total += i; -: 29: 1*: 30: int v = total > 100 ? 1 : 2; branch 0 taken 0% (fallthrough) branch 1 taken 100% -: 31: 1: 32: if (total != 45) branch 0 taken 0% (fallthrough) branch 1 taken 100% #####: 33: printf ("Failure\n"); call 0 never executed branch 1 never executed branch 2 never executed -: 34: else 1: 35: printf ("Success\n"); call 0 returned 100% branch 1 taken 100% (fallthrough) branch 2 taken 0% (throw) 1: 36: return 0; -: 37:} For each function, a line is printed showing how many times the function is called, how many times it returns and what percentage of the function's blocks were executed. For each basic block, a line is printed after the last line of the basic block describing the branch or call that ends the basic block. There can be multiple branches and calls listed for a single source line if there are multiple basic blocks that end on that line. In this case, the branches and calls are each given a number. There is no simple way to map these branches and calls back to source constructs. 
In general, though, the lowest numbered branch or call will correspond to the leftmost construct on the source line. For a branch, if it was executed at least once, then a percentage indicating the number of times the branch was taken divided by the number of times the branch was executed will be printed. Otherwise, the message "never executed" is printed. For a call, if it was executed at least once, then a percentage indicating the number of times the call returned divided by the number of times the call was executed will be printed. This will usually be 100%, but may be less for functions that call "exit" or "longjmp", and thus may not return every time they are called. The execution counts are cumulative. If the example program were executed again without removing the .gcda file, the count for the number of times each line in the source was executed would be added to the results of the previous run(s). This is potentially useful in several ways. For example, it could be used to accumulate data over a number of program runs as part of a test verification suite, or to provide more accurate long-term information over a large number of program runs. The data in the .gcda files is saved immediately before the program exits. For each source file compiled with -fprofile-arcs, the profiling code first attempts to read in an existing .gcda file; if the file doesn't match the executable (differing number of basic block counts) it will ignore the contents of the file. It then adds in the new execution counts and finally writes the data to the file. Using gcov with GCC Optimization If you plan to use gcov to help optimize your code, you must first compile your program with a special GCC option --coverage. Aside from that, you can use any other GCC options; but if you want to prove that every single line in your program was executed, you should not compile with optimization at the same time. On some machines the optimizer can eliminate some simple code lines by combining them with other lines. For example, code like this: if (a != b) c = 1; else c = 0; can be compiled into one instruction on some machines. In this case, there is no way for gcov to calculate separate execution counts for each line because there isn't separate code for each line. Hence the gcov output looks like this if you compiled the program with optimization: 100: 12:if (a != b) 100: 13: c = 1; 100: 14:else 100: 15: c = 0; The output shows that this block of code, combined by optimization, executed 100 times. In one sense this result is correct, because there was only one instruction representing all four of these lines. However, the output does not indicate how many times the result was 0 and how many times the result was 1. Inlineable functions can create unexpected line counts. Line counts are shown for the source code of the inlineable function, but what is shown depends on where the function is inlined, or if it is not inlined at all. If the function is not inlined, the compiler must emit an out of line copy of the function, in any object file that needs it. If fileA.o and fileB.o both contain out of line bodies of a particular inlineable function, they will also both contain coverage counts for that function. When fileA.o and fileB.o are linked together, the linker will, on many systems, select one of those out of line bodies for all calls to that function, and remove or ignore the other. Unfortunately, it will not remove the coverage counters for the unused function body. 
Hence when instrumented, all but one use of that function will show zero counts. If the function is inlined in several places, the block structure in each location might not be the same. For instance, a condition might now be calculable at compile time in some instances. Because the coverage of all the uses of the inline function will be shown for the same source lines, the line counts themselves might seem inconsistent. Long-running applications can use the "__gcov_reset" and "__gcov_dump" facilities to restrict profile collection to the program region of interest. Calling "__gcov_reset(void)" will clear all profile counters to zero, and calling "__gcov_dump(void)" will cause the profile information collected at that point to be dumped to .gcda output files. Instrumented applications use a static destructor with priority 99 to invoke the "__gcov_dump" function. Thus "__gcov_dump" is executed after all user defined static destructors, as well as handlers registered with "atexit". If an executable loads a dynamic shared object via dlopen functionality, -Wl,--dynamic-list-data is needed to dump all profile data. Profiling run-time library reports various errors related to profile manipulation and profile saving. Errors are printed into standard error output or GCOV_ERROR_FILE file, if environment variable is used. In order to terminate immediately after an errors occurs set GCOV_EXIT_AT_ERROR environment variable. That can help users to find profile clashing which leads to a misleading profile. SEE ALSO top gpl(7), gfdl(7), fsf-funding(7), gcc(1) and the Info entry for gcc. COPYRIGHT top Copyright (c) 1996-2019 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with the Invariant Sections being "GNU General Public License" and "Funding Free Software", the Front-Cover texts being (a) (see below), and with the Back-Cover Texts being (b) (see below). A copy of the license is included in the gfdl(7) man page. (a) The FSF's Front-Cover Text is: A GNU Manual (b) The FSF's Back-Cover Text is: You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development. COLOPHON top This page is part of the gcc (GNU Compiler Collection) project. Information about the project can be found at http://gcc.gnu.org/. If you have a bug report for this manual page, see http://gcc.gnu.org/bugs/. This page was obtained from the tarball gcc-9.5.0.tar.xz fetched from ftp://ftp.gwdg.de/pub/misc/gcc/releases/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org gcc-9.5.0 2022-05-27 GCOV(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
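A short sketch of the JSON intermediate format described under the -i/--json-format option above, reusing the manual's tmp.cpp example. The report name follows the .gcov.json.gz convention noted there; the exact basename may vary between releases.

    $ g++ --coverage tmp.cpp -o tmp
    $ ./tmp
    $ gcov --json-format tmp.cpp             # writes a gzip-compressed JSON report
    $ zcat tmp.cpp.gcov.json.gz | head       # inspect the "files", "functions" and "lines" arrays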
# gcov\n\n> Code coverage analysis and profiling tool that discovers untested parts of a program.\n> Also displays a copy of source code annotated with execution frequencies of code segments.\n> More information: <https://gcc.gnu.org/onlinedocs/gcc/Invoking-Gcov.html>.\n\n- Generate a coverage report named `file.cpp.gcov`:\n\n`gcov {{path/to/file.cpp}}`\n\n- Write individual execution counts for every basic block:\n\n`gcov --all-blocks {{path/to/file.cpp}}`\n\n- Write branch frequencies to the output file and print summary information to `stdout` as a percentage:\n\n`gcov --branch-probabilities {{path/to/file.cpp}}`\n\n- Write branch frequencies as the number of branches taken, rather than the percentage:\n\n`gcov --branch-counts {{path/to/file.cpp}}`\n\n- Do not create a `gcov` output file:\n\n`gcov --no-output {{path/to/file.cpp}}`\n\n- Write file level as well as function level summaries:\n\n`gcov --function-summaries {{path/to/file.cpp}}`\n
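As the manual notes above, execution counts are cumulative: each run adds its counts to the existing .gcda file. A small sketch, assuming tmp was built with --coverage as in the previous example:

    $ ./tmp && ./tmp      # two runs accumulate into tmp.gcda
    $ gcov tmp.cpp        # the report now reflects both runs
    $ rm -f *.gcda        # discard accumulated counts before a fresh measurement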
gdb
gdb(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training gdb(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON GDB(1) GNU Development Tools GDB(1) NAME top gdb - The GNU Debugger SYNOPSIS top gdb [OPTIONS] [prog|prog procID|prog core] DESCRIPTION top The purpose of a debugger such as GDB is to allow you to see what is going on "inside" another program while it executes -- or what another program was doing at the moment it crashed. GDB can do four main kinds of things (plus other things in support of these) to help you catch bugs in the act: Start your program, specifying anything that might affect its behavior. Make your program stop on specified conditions. Examine what has happened, when your program has stopped. Change things in your program, so you can experiment with correcting the effects of one bug and go on to learn about another. You can use GDB to debug programs written in C, C++, Fortran and Modula-2. GDB is invoked with the shell command "gdb". Once started, it reads commands from the terminal until you tell it to exit with the GDB command "quit" or "exit". You can get online help from GDB itself by using the command "help". You can run "gdb" with no arguments or options; but the most usual way to start GDB is with one argument or two, specifying an executable program as the argument: gdb program You can also start with both an executable program and a core file specified: gdb program core You can, instead, specify a process ID as a second argument or use option "-p", if you want to debug a running process: gdb program 1234 gdb -p 1234 would attach GDB to process 1234. With option -p you can omit the program filename. Here are some of the most frequently needed GDB commands: break [file:][function|line] Set a breakpoint at function or line (in file). run [arglist] Start your program (with arglist, if specified). bt Backtrace: display the program stack. print expr Display the value of an expression. c Continue running your program (after stopping, e.g. at a breakpoint). next Execute next program line (after stopping); step over any function calls in the line. edit [file:]function look at the program line where it is presently stopped. list [file:]function type the text of the program in the vicinity of where it is presently stopped. step Execute next program line (after stopping); step into any function calls in the line. help [name] Show information about GDB command name, or general information about using GDB. quit exit Exit from GDB. For full details on GDB, see Using GDB: A Guide to the GNU Source-Level Debugger, by Richard M. Stallman and Roland H. Pesch. The same text is available online as the "gdb" entry in the "info" program. OPTIONS top Any arguments other than options specify an executable file and core file (or process ID); that is, the first argument encountered with no associated option flag is equivalent to a --se option, and the second, if any, is equivalent to a -c option if it's the name of a file. Many options have both long and abbreviated forms; both are shown here. The long forms are also recognized if you truncate them, so long as enough of the option is present to be unambiguous. The abbreviated forms are shown here with - and long forms are shown with -- to reflect how they are shown in --help. 
However, GDB recognizes all of the following conventions for most options: "--option=value" "--option value" "-option=value" "-option value" "--o=value" "--o value" "-o=value" "-o value" All the options and command line arguments you give are processed in sequential order. The order makes a difference when the -x option is used. --help -h List all options, with brief explanations. --symbols=file -s file Read symbol table from file. --write Enable writing into executable and core files. --exec=file -e file Use file as the executable file to execute when appropriate, and for examining pure data in conjunction with a core dump. --se=file Read symbol table from file and use it as the executable file. --core=file -c file Use file as a core dump to examine. --command=file -x file Execute GDB commands from file. --eval-command=command -ex command Execute given GDB command. --init-eval-command=command -iex Execute GDB command before loading the inferior. --directory=directory -d directory Add directory to the path to search for source files. --nh Do not execute commands from ~/.config/gdb/gdbinit, ~/.gdbinit, ~/.config/gdb/gdbearlyinit, or ~/.gdbearlyinit --nx -n Do not execute commands from any .gdbinit or .gdbearlyinit initialization files. --quiet --silent -q "Quiet". Do not print the introductory and copyright messages. These messages are also suppressed in batch mode. --batch Run in batch mode. Exit with status 0 after processing all the command files specified with -x (and .gdbinit, if not inhibited). Exit with nonzero status if an error occurs in executing the GDB commands in the command files. Batch mode may be useful for running GDB as a filter, for example to download and run a program on another computer; in order to make this more useful, the message Program exited normally. (which is ordinarily issued whenever a program running under GDB control terminates) is not issued when running in batch mode. --batch-silent Run in batch mode, just like --batch, but totally silent. All GDB output is suppressed (stderr is unaffected). This is much quieter than --silent and would be useless for an interactive session. This is particularly useful when using targets that give Loading section messages, for example. Note that targets that give their output via GDB, as opposed to writing directly to "stdout", will also be made silent. --args prog [arglist] Change interpretation of command line so that arguments following this option are passed as arguments to the inferior. As an example, take the following command: gdb ./a.out -q It would start GDB with -q, not printing the introductory message. On the other hand, using: gdb --args ./a.out -q starts GDB with the introductory message, and passes the option to the inferior. --pid=pid Attach GDB to an already running program, with the PID pid. --tui Open the terminal user interface. --readnow Read all symbols from the given symfile on the first access. --readnever Do not read symbol files. --return-child-result GDB's exit code will be the same as the child's exit code. --configuration Print details about GDB configuration and then exit. --version Print version information and then exit. --cd=directory Run GDB using directory as its working directory, instead of the current directory. --data-directory=directory -D Run GDB using directory as its data directory. The data directory is where GDB searches for its auxiliary files. --fullname -f Emacs sets this option when it runs GDB as a subprocess. 
It tells GDB to output the full file name and line number in a standard, recognizable fashion each time a stack frame is displayed (which includes each time the program stops). This recognizable format looks like two \032 characters, followed by the file name, line number and character position separated by colons, and a newline. The Emacs-to-GDB interface program uses the two \032 characters as a signal to display the source code for the frame. -b baudrate Set the line speed (baud rate or bits per second) of any serial interface used by GDB for remote debugging. -l timeout Set timeout, in seconds, for remote debugging. --tty=device Run using device for your program's standard input and output. SEE ALSO top The full documentation for GDB is maintained as a Texinfo manual. If the "info" and "gdb" programs and GDB's Texinfo documentation are properly installed at your site, the command info gdb should give you access to the complete manual. Using GDB: A Guide to the GNU Source-Level Debugger, Richard M. Stallman and Roland H. Pesch, July 1991. COPYRIGHT top Copyright (c) 1988-2023 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with the Invariant Sections being "Free Software" and "Free Software Needs Free Documentation", with the Front-Cover Texts being "A GNU Manual," and with the Back-Cover Texts as in (a) below. (a) The FSF's Back-Cover Text is: "You are free to copy and modify this GNU Manual. Buying copies from GNU Press supports the FSF in developing GNU and promoting software freedom." COLOPHON top This page is part of the gdb (GNU debugger) project. Information about the project can be found at http://www.gnu.org/software/gdb/. If you have a bug report for this manual page, see http://www.gnu.org/software/gdb/bugs/. This page was obtained from the tarball gdb-14.1.tar.gz fetched from https://ftp.gnu.org/gnu/gdb/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org gdb-14.1 2023-12-03 GDB(1) Pages that refer to this page: coredumpctl(1), dbpmda(1), pldd(1), pmdbg(1), stap(1), stap-merge(1), ptrace(2), abort(3), backtrace(3), core(5), elf(5), gdbinit(5), proc(5), stappaths(7), crash(8), systemd-coredump(8), systemd-sysext(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
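A hedged sketch of non-interactive use built from the options above (--batch, -ex, -x, -p and --args); ./prog, data.txt and cmds.gdb are placeholders:

    $ gdb --batch -q -ex run -ex bt --args ./prog --input data.txt   # run the program, then print a backtrace
    $ gdb --batch -x cmds.gdb -p 1234                                # attach to PID 1234 and run scripted commands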
# gdb\n\n> The GNU Debugger.\n> More information: <https://www.gnu.org/software/gdb>.\n\n- Debug an executable:\n\n`gdb {{executable}}`\n\n- Attach `gdb` to a running process:\n\n`gdb -p {{procID}}`\n\n- Debug with a core file:\n\n`gdb -c {{core}} {{executable}}`\n\n- Execute given GDB commands upon start:\n\n`gdb -ex "{{commands}}" {{executable}}`\n\n- Start `gdb` and pass arguments to the executable:\n\n`gdb --args {{executable}} {{argument1}} {{argument2}}`\n
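For interactive sessions, the startup options above can be combined; a minimal sketch with ./prog as a placeholder executable:

    $ gdb -q --nx --tui ./prog   # quiet start, skip gdbinit files, open the terminal user interface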
getcap
getcap(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training getcap(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | REPORTING BUGS | SEE ALSO | COLOPHON GETCAP(8) System Manager's Manual GETCAP(8) NAME top getcap - examine file capabilities SYNOPSIS top getcap [-v] [-n] [-r] [-h] filename [ ... ] DESCRIPTION top getcap displays the name and capabilities of each specified file. OPTIONS top -h prints quick usage. -n prints any non-zero user namespace root user ID value found to be associated with a file's capabilities. -r enables recursive search. -v display all searched entries, even if the have no file- capabilities. NOTE: an empty value of '=' is not equivalent to an omitted (or removed) capability on a file. This is most significant with respect to the Ambient capability vector, since a process with Ambient capabilities will lose them when executing a file having '=' capabilities, but will retain the Ambient inheritance of privilege when executing a file with an omitted file capability. This special empty setting can be used to prevent a binary from executing with privilege. For some time, the kernel honored this suppression for root executing the file, but the kernel developers decided after a number of years that this behavior was unexpected for the superuser and reverted it just for that user identity. Suppression of root privilege, for a process tree, is possible, using the capsh(1) --mode option. filename One file per line. REPORTING BUGS top Please report bugs via: https://bugzilla.kernel.org/buglist.cgi?component=libcap&list_id=1090757 SEE ALSO top capsh(1), cap_get_file(3), cap_to_text(3), capabilities(7), user_namespaces(7), captree(8), getpcaps(8) and setcap(8). COLOPHON top This page is part of the libcap (capabilities commands and library) project. Information about the project can be found at https://git.kernel.org/pub/scm/libs/libcap/libcap.git/. If you have a bug report for this manual page, send it to morgan@kernel.org (please put "libcap" in the Subject line). This page was obtained from the project's upstream Git repository https://git.kernel.org/pub/scm/libs/libcap/libcap.git/ on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-06-24.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 2021-08-29 GETCAP(8) Pages that refer to this page: capsh(1), libcap(3), capabilities(7), getpcaps(8), setcap(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
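A brief sketch of typical use, assuming a hypothetical binary ./myping that has been granted a capability with setcap(8); getcap's output format varies between libcap versions:

    # setcap cap_net_raw+ep ./myping      # grant a capability (requires privilege; see setcap(8))
    $ getcap ./myping                     # show the file's capability set
    $ getcap -r /usr/bin 2>/dev/null      # recursively list capability-bearing files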
# getcap\n\n> Command to display the name and capabilities of each specified file.\n> More information: <https://manned.org/getcap>.\n\n- Get capabilities for the given files:\n\n`getcap {{path/to/file1 path/to/file2 ...}}`\n\n- Get capabilities for all the files recursively under the given directories:\n\n`getcap -r {{path/to/directory1 path/to/directory2 ...}}`\n\n- Display all searched entries, even if they have no capabilities set:\n\n`getcap -v {{path/to/file1 path/to/file2 ...}}`\n
getconf
getconf(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training getconf(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT GETCONF(1P) POSIX Programmer's Manual GETCONF(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top getconf get configuration values SYNOPSIS top getconf [-v specification] system_var getconf [-v specification] path_var pathname DESCRIPTION top In the first synopsis form, the getconf utility shall write to the standard output the value of the variable specified by the system_var operand. In the second synopsis form, the getconf utility shall write to the standard output the value of the variable specified by the path_var operand for the path specified by the pathname operand. The value of each configuration variable shall be determined as if it were obtained by calling the function from which it is defined to be available by this volume of POSIX.12017 or by the System Interfaces volume of POSIX.12017 (see the OPERANDS section). The value shall reflect conditions in the current operating environment. OPTIONS top The getconf utility shall conform to the Base Definitions volume of POSIX.12017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -v specification Indicate a specific specification and version for which configuration variables shall be determined. If this option is not specified, the values returned correspond to an implementation default conforming compilation environment. If the command: getconf _POSIX_V7_ILP32_OFF32 does not write "-1\n" or "undefined\n" to standard output, then commands of the form: getconf -v POSIX_V7_ILP32_OFF32 ... determine values for configuration variables corresponding to the POSIX_V7_ILP32_OFF32 compilation environment specified in c99(1p), the EXTENDED DESCRIPTION. If the command: getconf _POSIX_V7_ILP32_OFFBIG does not write "-1\n" or "undefined\n" to standard output, then commands of the form: getconf -v POSIX_V7_ILP32_OFFBIG ... determine values for configuration variables corresponding to the POSIX_V7_ILP32_OFFBIG compilation environment specified in c99(1p), the EXTENDED DESCRIPTION. If the command: getconf _POSIX_V7_LP64_OFF64 does not write "-1\n" or "undefined\n" to standard output, then commands of the form: getconf -v POSIX_V7_LP64_OFF64 ... determine values for configuration variables corresponding to the POSIX_V7_LP64_OFF64 compilation environment specified in c99(1p), the EXTENDED DESCRIPTION. If the command: getconf _POSIX_V7_LPBIG_OFFBIG does not write "-1\n" or "undefined\n" to standard output, then commands of the form: getconf -v POSIX_V7_LPBIG_OFFBIG ... determine values for configuration variables corresponding to the POSIX_V7_LPBIG_OFFBIG compilation environment specified in c99(1p), the EXTENDED DESCRIPTION. OPERANDS top The following operands shall be supported: path_var A name of a configuration variable. 
All of the variables in the Variable column of the table in the DESCRIPTION of the fpathconf() function defined in the System Interfaces volume of POSIX.12017, without the enclosing braces, shall be supported. The implementation may add other local variables. pathname A pathname for which the variable specified by path_var is to be determined. system_var A name of a configuration variable. All of the following variables shall be supported: * The names in the Variable column of the table in the DESCRIPTION of the sysconf() function in the System Interfaces volume of POSIX.12017, except for the entries corresponding to _SC_CLK_TCK, _SC_GETGR_R_SIZE_MAX, and _SC_GETPW_R_SIZE_MAX, without the enclosing braces. For compatibility with earlier versions, the following variable names shall also be supported: POSIX2_C_BIND POSIX2_C_DEV POSIX2_CHAR_TERM POSIX2_FORT_DEV POSIX2_FORT_RUN POSIX2_LOCALEDEF POSIX2_SW_DEV POSIX2_UPE POSIX2_VERSION and shall be equivalent to the same name prefixed with an <underscore>. This requirement may be removed in a future version. * The names of the symbolic constants used as the name argument of the confstr() function in the System Interfaces volume of POSIX.12017, without the _CS_ prefix. * The names of the symbolic constants listed under the headings ``Maximum Values'' and ``Minimum Values'' in the description of the <limits.h> header in the Base Definitions volume of POSIX.12017, without the enclosing braces. For compatibility with earlier versions, the following variable names shall also be supported: POSIX2_BC_BASE_MAX POSIX2_BC_DIM_MAX POSIX2_BC_SCALE_MAX POSIX2_BC_STRING_MAX POSIX2_COLL_WEIGHTS_MAX POSIX2_EXPR_NEST_MAX POSIX2_LINE_MAX POSIX2_RE_DUP_MAX and shall be equivalent to the same name prefixed with an <underscore>. This requirement may be removed in a future version. The implementation may add other local values. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of getconf: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top Default. STDOUT top If the specified variable is defined on the system and its value is described to be available from the confstr() function defined in the System Interfaces volume of POSIX.12017, its value shall be written in the following format: "%s\n", <value> Otherwise, if the specified variable is defined on the system, its value shall be written in the following format: "%d\n", <value> If the specified variable is valid, but is undefined on the system, getconf shall write using the following format: "undefined\n" If the variable name is invalid or an error occurs, nothing shall be written to standard output. STDERR top The standard error shall be used only for diagnostic messages. 
OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 The specified variable is valid and information about its current state was written successfully. >0 An error occurred. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top None. EXAMPLES top The following example illustrates the value of {NGROUPS_MAX}: getconf NGROUPS_MAX The following example illustrates the value of {NAME_MAX} for a specific directory: getconf NAME_MAX /usr The following example shows how to deal more carefully with results that might be unspecified: if value=$(getconf PATH_MAX /usr); then if [ "$value" = "undefined" ]; then echo PATH_MAX in /usr is indeterminate. else echo PATH_MAX in /usr is $value. fi else echo Error in getconf. fi RATIONALE top The original need for this utility, and for the confstr() function, was to provide a way of finding the configuration- defined default value for the PATH environment variable. Since PATH can be modified by the user to include directories that could contain utilities replacing the standard utilities, shell scripts need a way to determine the system-supplied PATH environment variable value that contains the correct search path for the standard utilities. It was later suggested that access to the other variables described in this volume of POSIX.12017 could also be useful to applications. This functionality of getconf would not be adequately subsumed by another command such as: grep var /etc/conf because such a strategy would provide correct values for neither those variables that can vary at runtime, nor those that can vary depending on the path. Early proposal versions of getconf specified exit status 1 when the specified variable was valid, but not defined on the system. The output string "undefined" is now used to specify this case with exit code 0 because so many things depend on an exit code of zero when an invoked utility is successful. FUTURE DIRECTIONS top None. SEE ALSO top c99(1p) The Base Definitions volume of POSIX.12017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines, limits.h(0p) The System Interfaces volume of POSIX.12017, confstr(3p), fpathconf(3p), sysconf(3p), system(3p) COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 GETCONF(1P) Pages that refer to this page: poll.h(0p), stddef.h(0p), sys_types.h(0p), termios.h(0p), wchar.h(0p), c99(1p), fincore(1), fpathconf(3p), sysconf(3p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
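As a concrete illustration of the `-v` option described above, the sketch below first checks whether a compilation environment is supported and then asks for the compiler and linker flags that select it; the variable names are the `*_CFLAGS`/`*_LDFLAGS` names defined alongside c99(1p), as referenced in the manual.

```sh
# Prints "-1" or "undefined" if the LP64_OFF64 environment is unsupported.
getconf _POSIX_V7_LP64_OFF64

# If it is supported, query the flags needed to compile and link for it.
getconf -v POSIX_V7_LP64_OFF64 POSIX_V7_LP64_OFF64_CFLAGS
getconf -v POSIX_V7_LP64_OFF64 POSIX_V7_LP64_OFF64_LDFLAGS
```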
# getconf\n\n> Get configuration values from your Linux system.\n> More information: <https://manned.org/getconf.1>.\n\n- List [a]ll available configuration values:\n\n`getconf -a`\n\n- List the configuration values for a specific directory:\n\n`getconf -a {{path/to/directory}}`\n\n- Check whether the system is 32-bit or 64-bit:\n\n`getconf LONG_BIT`\n\n- Check how many processes the current user can run at once:\n\n`getconf CHILD_MAX`\n\n- List every configuration value and filter the output with `grep` (e.g. every value containing MAX):\n\n`getconf -a | grep MAX`\n
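A short sketch of how the `LONG_BIT` check above is typically used inside a script; the 64-bit requirement is just an illustrative policy, not something the manual mandates.

```sh
#!/bin/sh
# Abort early on 32-bit systems (LONG_BIT prints 32 or 64).
if [ "$(getconf LONG_BIT)" -ne 64 ]; then
    echo "this package requires a 64-bit system" >&2
    exit 1
fi

# Show every limit-related value known to this system.
getconf -a | grep MAX
```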
getent
getent(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training getent(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | SEE ALSO getent(1) General Commands Manual getent(1) NAME top getent - get entries from Name Service Switch libraries SYNOPSIS top getent [option]... database key... DESCRIPTION top The getent command displays entries from databases supported by the Name Service Switch libraries, which are configured in /etc/nsswitch.conf. If one or more key arguments are provided, then only the entries that match the supplied keys will be displayed. Otherwise, if no key is provided, all entries will be displayed (unless the database does not support enumeration). The database may be any of those supported by the GNU C Library, listed below: ahosts When no key is provided, use sethostent(3), gethostent(3), and endhostent(3) to enumerate the hosts database. This is identical to using hosts. When one or more key arguments are provided, pass each key in succession to getaddrinfo(3) with the address family AF_UNSPEC, enumerating each socket address structure returned. ahostsv4 Same as ahosts, but use the address family AF_INET. ahostsv6 Same as ahosts, but use the address family AF_INET6. The call to getaddrinfo(3) in this case includes the AI_V4MAPPED flag. aliases When no key is provided, use setaliasent(3), getaliasent(3), and endaliasent(3) to enumerate the aliases database. When one or more key arguments are provided, pass each key in succession to getaliasbyname(3) and display the result. ethers When one or more key arguments are provided, pass each key in succession to ether_aton(3) and ether_hostton(3) until a result is obtained, and display the result. Enumeration is not supported on ethers, so a key must be provided. group When no key is provided, use setgrent(3), getgrent(3), and endgrent(3) to enumerate the group database. When one or more key arguments are provided, pass each numeric key to getgrgid(3) and each nonnumeric key to getgrnam(3) and display the result. gshadow When no key is provided, use setsgent(3), getsgent(3), and endsgent(3) to enumerate the gshadow database. When one or more key arguments are provided, pass each key in succession to getsgnam(3) and display the result. hosts When no key is provided, use sethostent(3), gethostent(3), and endhostent(3) to enumerate the hosts database. When one or more key arguments are provided, pass each key to gethostbyaddr(3) or gethostbyname2(3), depending on whether a call to inet_pton(3) indicates that the key is an IPv6 or IPv4 address or not, and display the result. initgroups When one or more key arguments are provided, pass each key in succession to getgrouplist(3) and display the result. Enumeration is not supported on initgroups, so a key must be provided. netgroup When one key is provided, pass the key to setnetgrent(3) and, using getnetgrent(3) display the resulting string triple (hostname, username, domainname). Alternatively, three keys may be provided, which are interpreted as the hostname, username, and domainname to match to a netgroup name via innetgr(3). Enumeration is not supported on netgroup, so either one or three keys must be provided. networks When no key is provided, use setnetent(3), getnetent(3), and endnetent(3) to enumerate the networks database. When one or more key arguments are provided, pass each numeric key to getnetbyaddr(3) and each nonnumeric key to getnetbyname(3) and display the result. 
passwd When no key is provided, use setpwent(3), getpwent(3), and endpwent(3) to enumerate the passwd database. When one or more key arguments are provided, pass each numeric key to getpwuid(3) and each nonnumeric key to getpwnam(3) and display the result. protocols When no key is provided, use setprotoent(3), getprotoent(3), and endprotoent(3) to enumerate the protocols database. When one or more key arguments are provided, pass each numeric key to getprotobynumber(3) and each nonnumeric key to getprotobyname(3) and display the result. rpc When no key is provided, use setrpcent(3), getrpcent(3), and endrpcent(3) to enumerate the rpc database. When one or more key arguments are provided, pass each numeric key to getrpcbynumber(3) and each nonnumeric key to getrpcbyname(3) and display the result. services When no key is provided, use setservent(3), getservent(3), and endservent(3) to enumerate the services database. When one or more key arguments are provided, pass each numeric key to getservbynumber(3) and each nonnumeric key to getservbyname(3) and display the result. shadow When no key is provided, use setspent(3), getspent(3), and endspent(3) to enumerate the shadow database. When one or more key arguments are provided, pass each key in succession to getspnam(3) and display the result. OPTIONS top -s service, --service service Override all databases with the specified service. (Since glibc 2.2.5.) -s database:service, --service database:service Override only specified databases with the specified service. The option may be used multiple times, but only the last service for each database will be used. (Since glibc 2.4.) -i, --no-idn Disables IDN encoding in lookups for ahosts/getaddrinfo(3) (Since glibc-2.13.) -?, --help Print a usage summary and exit. --usage Print a short usage summary and exit. -V, --version Print the version number, license, and disclaimer of warranty for getent. EXIT STATUS top One of the following exit values can be returned by getent: 0 Command completed successfully. 1 Missing arguments, or database unknown. 2 One or more supplied key could not be found in the database. 3 Enumeration not supported on this database. SEE ALSO top nsswitch.conf(5) Linux man-pages (unreleased) (date) getent(1) Pages that refer to this page: groups(1), homectl(1), userdbctl(1), users(1), nsswitch.conf(5), passwd(5@@shadow-utils), nss-myhostname(8), nss-mymachines(8), nss-systemd(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# getent\n\n> Get entries from Name Service Switch libraries.\n> More information: <https://manned.org/getent>.\n\n- Get a list of all groups:\n\n`getent group`\n\n- List the members of a group:\n\n`getent group {{group_name}}`\n\n- Get a list of all services:\n\n`getent services`\n\n- Find a username by UID:\n\n`getent passwd {{1000}}`\n\n- Perform a reverse DNS lookup:\n\n`getent hosts {{ip_address}}`\n
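Because `getent` exits with status 2 when a key cannot be found (see the EXIT STATUS section above), it works well for existence checks in scripts; the user name below is only an example.

```sh
#!/bin/sh
# Check whether a user exists in the passwd database, whatever sources
# (files, LDAP, ...) nsswitch.conf configures, before trying to create it.
user=backup
if getent passwd "$user" > /dev/null; then
    echo "user $user already exists"
else
    echo "user $user not found"   # getent exits with status 2 for a missing key
fi
```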
getfacl
getfacl(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training getfacl(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFORMANCE TO POSIX 1003.1e DRAFT STANDARD 17 | AUTHOR | SEE ALSO | COLOPHON GETFACL(1) Access Control Lists GETFACL(1) NAME top getfacl - get file access control lists SYNOPSIS top getfacl [-aceEsRLPtpndvh] file ... getfacl [-aceEsRLPtpndvh] - DESCRIPTION top For each file, getfacl displays the file name, owner, the group, and the Access Control List (ACL). If a directory has a default ACL, getfacl also displays the default ACL. Non-directories cannot have default ACLs. If getfacl is used on a file system that does not support ACLs, getfacl displays the access permissions defined by the traditional file mode permission bits. The output format of getfacl is as follows: 1: # file: somedir/ 2: # owner: lisa 3: # group: staff 4: # flags: -s- 5: user::rwx 6: user:joe:rwx #effective:r-x 7: group::rwx #effective:r-x 8: group:cool:r-x 9: mask::r-x 10: other::r-x 11: default:user::rwx 12: default:user:joe:rwx #effective:r-x 13: default:group::r-x 14: default:mask::r-x 15: default:other::--- Lines 1--3 indicate the file name, owner, and owning group. Line 4 indicates the setuid (s), setgid (s), and sticky (t) bits: either the letter representing the bit, or else a dash (-). This line is included if any of those bits is set and left out otherwise, so it will not be shown for most files. (See CONFORMANCE TO POSIX 1003.1e DRAFT STANDARD 17 below.) Lines 5, 7 and 10 correspond to the user, group and other fields of the file mode permission bits. These three are called the base ACL entries. Lines 6 and 8 are named user and named group entries. Line 9 is the effective rights mask. This entry limits the effective rights granted to all groups and to named users. (The file owner and others permissions are not affected by the effective rights mask; all other entries are.) Lines 11--15 display the default ACL associated with this directory. Directories may have a default ACL. Regular files never have a default ACL. The default behavior for getfacl is to display both the ACL and the default ACL, and to include an effective rights comment for lines where the rights of the entry differ from the effective rights. If output is to a terminal, the effective rights comment is aligned to column 40. Otherwise, a single tab character separates the ACL entry and the effective rights comment. The ACL listings of multiple files are separated by blank lines. The output of getfacl can also be used as input to setfacl. PERMISSIONS Process with search access to a file (i.e., processes with read access to the containing directory of a file) are also granted read access to the file's ACLs. This is analogous to the permissions required for accessing the file mode. OPTIONS top -a, --access Display the file access control list. -d, --default Display the default access control list. -c, --omit-header Do not display the comment header (the first three lines of each file's output). -e, --all-effective Print all effective rights comments, even if identical to the rights defined by the ACL entry. -E, --no-effective Do not print effective rights comments. -s, --skip-base Skip files that only have the base ACL entries (owner, group, others). -R, --recursive List the ACLs of all files and directories recursively. -L, --logical Logical walk, follow symbolic links to directories. 
The default behavior is to follow symbolic link arguments, and skip symbolic links encountered in subdirectories. Only effective in combination with -R. -P, --physical Physical walk, do not follow symbolic links to directories. This also skips symbolic link arguments. Only effective in combination with -R. -t, --tabular Use an alternative tabular output format. The ACL and the default ACL are displayed side by side. Permissions that are ineffective due to the ACL mask entry are displayed capitalized. The entry tag names for the ACL_USER_OBJ and ACL_GROUP_OBJ entries are also displayed in capital letters, which helps in spotting those entries. -p, --absolute-names Do not strip leading slash characters (`/'). The default behavior is to strip leading slash characters. -n, --numeric List numeric user and group IDs -v, --version Print the version of getfacl and exit. -h, --help Print help explaining the command line options. -- End of command line options. All remaining parameters are interpreted as file names, even if they start with a dash character. - If the file name parameter is a single dash character, getfacl reads a list of files from standard input. CONFORMANCE TO POSIX 1003.1e DRAFT STANDARD 17 top If the environment variable POSIXLY_CORRECT is defined, the default behavior of getfacl changes in the following ways: Unless otherwise specified, only the ACL is printed. The default ACL is only printed if the -d option is given. If no command line parameter is given, getfacl behaves as if it was invoked as ``getfacl -''. No flags comments indicating the setuid, setgid, and sticky bits are generated. AUTHOR top Andreas Gruenbacher, <andreas.gruenbacher@gmail.com>. Please send your bug reports and comments to the above address. SEE ALSO top setfacl(1), acl(5) COLOPHON top This page is part of the acl (manipulating access control lists) project. Information about the project can be found at http://savannah.nongnu.org/projects/acl. If you have a bug report for this manual page, see http://savannah.nongnu.org/bugs/?group=acl. This page was obtained from the project's upstream Git repository git://git.savannah.nongnu.org/acl.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-01.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org May 2000 ACL File Utilities GETFACL(1) Pages that refer to this page: chacl(1), setfacl(1), tmpfiles.d(5) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# getfacl\n\n> Get file access control lists (ACLs).\n> More information: <https://manned.org/getfacl>.\n\n- Display the file access control list:\n\n`getfacl {{path/to/file_or_directory}}`\n\n- Display the file access control list with [n]umeric user and group IDs:\n\n`getfacl --numeric {{path/to/file_or_directory}}`\n\n- Display the file access control list in [t]abular output format:\n\n`getfacl --tabular {{path/to/file_or_directory}}`\n
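The manual notes that `getfacl` output can be used as input to `setfacl`, which gives a simple way to back up and restore ACLs; the directory and file names below are illustrative, and the restore step assumes the companion `setfacl --restore` option from the acl package.

```sh
# Recursively dump the ACLs of a directory tree, keeping absolute paths (-p)
# so the backup can be restored from any working directory.
getfacl -R -p /srv/share > share-acls.txt

# Later, feed the saved entries back to setfacl to restore them.
setfacl --restore=share-acls.txt
```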
getopt
getopt(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training getopt(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | PARSING | OUTPUT | QUOTING | SCANNING MODES | COMPATIBILITY | RETURN CODES | EXAMPLES | ENVIRONMENT | BUGS | AUTHOR | SEE ALSO | REPORTING BUGS | AVAILABILITY GETOPT(1) User Commands GETOPT(1) NAME top getopt - parse command options (enhanced) SYNOPSIS top getopt optstring parameters getopt [options] [--] optstring parameters getopt [options] -o|--options optstring [options] [--] parameters DESCRIPTION top getopt is used to break up (parse) options in command lines for easy parsing by shell procedures, and to check for valid options. It uses the GNU getopt(3) routines to do this. The parameters getopt is called with can be divided into two parts: options which modify the way getopt will do the parsing (the options and the optstring in the SYNOPSIS), and the parameters which are to be parsed (parameters in the SYNOPSIS). The second part will start at the first non-option parameter that is not an option argument, or after the first occurrence of '--'. If no '-o' or '--options' option is found in the first part, the first parameter of the second part is used as the short options string. If the environment variable GETOPT_COMPATIBLE is set, or if the first parameter is not an option (does not start with a '-', the first format in the SYNOPSIS), getopt will generate output that is compatible with that of other versions of getopt(1). It will still do parameter shuffling and recognize optional arguments (see the COMPATIBILITY section for more information). Traditional implementations of getopt(1) are unable to cope with whitespace and other (shell-specific) special characters in arguments and non-option parameters. To solve this problem, this implementation can generate quoted output which must once again be interpreted by the shell (usually by using the eval command). This has the effect of preserving those characters, but you must call getopt in a way that is no longer compatible with other versions (the second or third format in the SYNOPSIS). To determine whether this enhanced version of getopt(1) is installed, a special test option (-T) can be used. OPTIONS top -a, --alternative Allow long options to start with a single '-'. -l, --longoptions longopts The long (multi-character) options to be recognized. More than one option name may be specified at once, by separating the names with commas. This option may be given more than once, the longopts are cumulative. Each long option name in longopts may be followed by one colon to indicate it has a required argument, and by two colons to indicate it has an optional argument. -n, --name progname The name that will be used by the getopt(3) routines when it reports errors. Note that errors of getopt(1) are still reported as coming from getopt. -o, --options shortopts The short (one-character) options to be recognized. If this option is not found, the first parameter of getopt that does not start with a '-' (and is not an option argument) is used as the short options string. Each short option character in shortopts may be followed by one colon to indicate it has a required argument, and by two colons to indicate it has an optional argument. The first character of shortopts may be '+' or '-' to influence the way options are parsed and output is generated (see the SCANNING MODES section for details). -q, --quiet Disable error reporting by getopt(3). 
-Q, --quiet-output Do not generate normal output. Errors are still reported by getopt(3), unless you also use -q. -s, --shell shell Set quoting conventions to those of shell. If the -s option is not given, the BASH conventions are used. Valid arguments are currently 'sh', 'bash', 'csh', and 'tcsh'. -T, --test Test if your getopt(1) is this enhanced version or an old version. This generates no output, and sets the error status to 4. Other implementations of getopt(1), and this version if the environment variable GETOPT_COMPATIBLE is set, will return '--' and error status 0. -u, --unquoted Do not quote the output. Note that whitespace and special (shell-dependent) characters can cause havoc in this mode (like they do with other getopt(1) implementations). -h, --help Display help text and exit. -V, --version Print version and exit. PARSING top This section specifies the format of the second part of the parameters of getopt (the parameters in the SYNOPSIS). The next section (OUTPUT) describes the output that is generated. These parameters were typically the parameters a shell function was called with. Care must be taken that each parameter the shell function was called with corresponds to exactly one parameter in the parameter list of getopt (see the EXAMPLES). All parsing is done by the GNU getopt(3) routines. The parameters are parsed from left to right. Each parameter is classified as a short option, a long option, an argument to an option, or a non-option parameter. A simple short option is a '-' followed by a short option character. If the option has a required argument, it may be written directly after the option character or as the next parameter (i.e., separated by whitespace on the command line). If the option has an optional argument, it must be written directly after the option character if present. It is possible to specify several short options after one '-', as long as all (except possibly the last) do not have required or optional arguments. A long option normally begins with '--' followed by the long option name. If the option has a required argument, it may be written directly after the long option name, separated by '=', or as the next argument (i.e., separated by whitespace on the command line). If the option has an optional argument, it must be written directly after the long option name, separated by '=', if present (if you add the '=' but nothing behind it, it is interpreted as if no argument was present; this is a slight bug, see the BUGS). Long options may be abbreviated, as long as the abbreviation is not ambiguous. Each parameter not starting with a '-', and not a required argument of a previous option, is a non-option parameter. Each parameter after a '--' parameter is always interpreted as a non-option parameter. If the environment variable POSIXLY_CORRECT is set, or if the short option string started with a '+', all remaining parameters are interpreted as non-option parameters as soon as the first non-option parameter is found. OUTPUT top Output is generated for each element described in the previous section. Output is done in the same order as the elements are specified in the input, except for non-option parameters. Output can be done in compatible (unquoted) mode, or in such way that whitespace and other special characters within arguments and non-option parameters are preserved (see QUOTING). 
When the output is processed in the shell script, it will seem to be composed of distinct elements that can be processed one by one (by using the shift command in most shell languages). This is imperfect in unquoted mode, as elements can be split at unexpected places if they contain whitespace or special characters. If there are problems parsing the parameters, for example because a required argument is not found or an option is not recognized, an error will be reported on stderr, there will be no output for the offending element, and a non-zero error status is returned. For a short option, a single '-' and the option character are generated as one parameter. If the option has an argument, the next parameter will be the argument. If the option takes an optional argument, but none was found, the next parameter will be generated but be empty in quoting mode, but no second parameter will be generated in unquoted (compatible) mode. Note that many other getopt(1) implementations do not support optional arguments. If several short options were specified after a single '-', each will be present in the output as a separate parameter. For a long option, '--' and the full option name are generated as one parameter. This is done regardless whether the option was abbreviated or specified with a single '-' in the input. Arguments are handled as with short options. Normally, no non-option parameters output is generated until all options and their arguments have been generated. Then '--' is generated as a single parameter, and after it the non-option parameters in the order they were found, each as a separate parameter. Only if the first character of the short options string was a '-', non-option parameter output is generated at the place they are found in the input (this is not supported if the first format of the SYNOPSIS is used; in that case all preceding occurrences of '-' and '+' are ignored). QUOTING top In compatibility mode, whitespace or 'special' characters in arguments or non-option parameters are not handled correctly. As the output is fed to the shell script, the script does not know how it is supposed to break the output into separate parameters. To circumvent this problem, this implementation offers quoting. The idea is that output is generated with quotes around each parameter. When this output is once again fed to the shell (usually by a shell eval command), it is split correctly into separate parameters. Quoting is not enabled if the environment variable GETOPT_COMPATIBLE is set, if the first form of the SYNOPSIS is used, or if the option '-u' is found. Different shells use different quoting conventions. You can use the '-s' option to select the shell you are using. The following shells are currently supported: 'sh', 'bash', 'csh' and 'tcsh'. Actually, only two 'flavors' are distinguished: sh-like quoting conventions and csh-like quoting conventions. Chances are that if you use another shell script language, one of these flavors can still be used. SCANNING MODES top The first character of the short options string may be a '-' or a '+' to indicate a special scanning mode. If the first calling form in the SYNOPSIS is used they are ignored; the environment variable POSIXLY_CORRECT is still examined, though. If the first character is '+', or if the environment variable POSIXLY_CORRECT is set, parsing stops as soon as the first non-option parameter (i.e., a parameter that does not start with a '-') is found that is not an option argument. 
The remaining parameters are all interpreted as non-option parameters. If the first character is a '-', non-option parameters are outputted at the place where they are found; in normal operation, they are all collected at the end of output after a '--' parameter has been generated. Note that this '--' parameter is still generated, but it will always be the last parameter in this mode. COMPATIBILITY top This version of getopt(1) is written to be as compatible as possible to other versions. Usually you can just replace them with this version without any modifications, and with some advantages. If the first character of the first parameter of getopt is not a '-', getopt goes into compatibility mode. It will interpret its first parameter as the string of short options, and all other arguments will be parsed. It will still do parameter shuffling (i.e., all non-option parameters are output at the end), unless the environment variable POSIXLY_CORRECT is set, in which case, getopt will prepend a '+' before short options automatically. The environment variable GETOPT_COMPATIBLE forces getopt into compatibility mode. Setting both this environment variable and POSIXLY_CORRECT offers 100% compatibility for 'difficult' programs. Usually, though, neither is needed. In compatibility mode, leading '-' and '+' characters in the short options string are ignored. RETURN CODES top getopt returns error code 0 for successful parsing, 1 if getopt(3) returns errors, 2 if it does not understand its own parameters, 3 if an internal error occurs like out-of-memory, and 4 if it is called with -T. EXAMPLES top Example scripts for (ba)sh and (t)csh are provided with the getopt(1) distribution, and are installed in /usr/share/doc/util-linux directory. ENVIRONMENT top POSIXLY_CORRECT This environment variable is examined by the getopt(3) routines. If it is set, parsing stops as soon as a parameter is found that is not an option or an option argument. All remaining parameters are also interpreted as non-option parameters, regardless whether they start with a '-'. GETOPT_COMPATIBLE Forces getopt to use the first calling format as specified in the SYNOPSIS. BUGS top getopt(3) can parse long options with optional arguments that are given an empty optional argument (but cannot do this for short options). This getopt(1) treats optional arguments that are empty as if they were not present. The syntax if you do not want any short option variables at all is not very intuitive (you have to set them explicitly to the empty string). AUTHOR top Frodo Looijaard <frodo@frodo.looijaard.name> SEE ALSO top bash(1), tcsh(1), getopt(3) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The getopt command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 GETOPT(1) Pages that refer to this page: getopt(1), git-rev-parse(1), getopt(3) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# getopt\n\n> Parse command-line arguments.\n> More information: <https://www.gnu.org/software/libc/manual/html_node/Getopt.html>.\n\n- Parse optional `verbose`/`version` flags with shorthands:\n\n`getopt --options vV --longoptions verbose,version -- --version --verbose`\n\n- Add a `--file` option with a required argument and shorthand `-f`:\n\n`getopt --options f: --longoptions file: -- --file=somefile`\n\n- Add a `--verbose` option with an optional argument and shorthand `-v`, and pass a non-option parameter `arg`:\n\n`getopt --options v:: --longoptions verbose:: -- --verbose arg`\n\n- Accept an `-r` flag, a `--verbose` flag with shorthand `-v`, a `--source` option with an optional argument and shorthand `-s`, and a `--target` option with a required argument and shorthand `-t`:\n\n`getopt --options rv::s::t: --longoptions verbose,source::,target: -- -v --target target`\n
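Putting the OUTPUT and QUOTING sections above into practice, here is a minimal sketch of the usual shell wrapper around enhanced `getopt`; the option names (`--file`, `--verbose`) are illustrative, not part of the manual.

```sh
#!/bin/sh
# Fail early if this is not the enhanced getopt (the -T/--test form exits with status 4).
getopt --test > /dev/null
if [ $? -ne 4 ]; then
    echo "enhanced getopt(1) is required" >&2
    exit 1
fi

# Parse this script's arguments; the quoted output preserves whitespace in arguments.
parsed=$(getopt --options f:v --longoptions file:,verbose --name "$0" -- "$@") || exit 2

# Re-read the quoted output into the positional parameters.
eval set -- "$parsed"

file="" verbose=0
while true; do
    case "$1" in
        -f|--file)    file=$2; shift 2 ;;
        -v|--verbose) verbose=1; shift ;;
        --)           shift; break ;;
    esac
done
echo "file=$file verbose=$verbose remaining: $*"
```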
gfortran
gfortran(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training gfortran(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT | BUGS | SEE ALSO | AUTHOR | COPYRIGHT | COLOPHON GFORTRAN(1) GNU GFORTRAN(1) NAME top gfortran - GNU Fortran compiler SYNOPSIS top gfortran [-c|-S|-E] [-g] [-pg] [-Olevel] [-Wwarn...] [-pedantic] [-Idir...] [-Ldir...] [-Dmacro[=defn]...] [-Umacro] [-foption...] [-mmachine-option...] [-o outfile] infile... Only the most useful options are listed here; see below for the remainder. DESCRIPTION top The gfortran command supports all the options supported by the gcc command. Only options specific to GNU Fortran are documented here. All GCC and GNU Fortran options are accepted both by gfortran and by gcc (as well as any other drivers built at the same time, such as g++), since adding GNU Fortran to the GCC distribution enables acceptance of GNU Fortran options by all of the relevant drivers. In some cases, options have positive and negative forms; the negative form of -ffoo would be -fno-foo. This manual documents only one of these two forms, whichever one is not the default. OPTIONS top Here is a summary of all the options specific to GNU Fortran, grouped by type. Explanations are in the following sections. Fortran Language Options -fall-intrinsics -fbackslash -fcray-pointer -fd-lines-as-code -fd-lines-as-comments -fdec -fdec-structure -fdec-intrinsic-ints -fdec-static -fdec-math -fdec-include -fdefault-double-8 -fdefault-integer-8 -fdefault-real-8 -fdefault-real-10 -fdefault-real-16 -fdollar-ok -ffixed-line-length-n -ffixed-line-length-none -fpad-source -ffree-form -ffree-line-length-n -ffree-line-length-none -fimplicit-none -finteger-4-integer-8 -fmax-identifier-length -fmodule-private -ffixed-form -fno-range-check -fopenacc -fopenmp -freal-4-real-10 -freal-4-real-16 -freal-4-real-8 -freal-8-real-10 -freal-8-real-16 -freal-8-real-4 -std=std -ftest-forall-temp Preprocessing Options -A-question[=answer] -Aquestion=answer -C -CC -Dmacro[=defn] -H -P -Umacro -cpp -dD -dI -dM -dN -dU -fworking-directory -imultilib dir -iprefix file -iquote -isysroot dir -isystem dir -nocpp -nostdinc -undef Error and Warning Options -Waliasing -Wall -Wampersand -Wargument-mismatch -Warray-bounds -Wc-binding-type -Wcharacter-truncation -Wconversion -Wdo-subscript -Wfunction-elimination -Wimplicit-interface -Wimplicit-procedure -Wintrinsic-shadow -Wuse-without-only -Wintrinsics-std -Wline-truncation -Wno-align-commons -Wno-tabs -Wreal-q-constant -Wsurprising -Wunderflow -Wunused-parameter -Wrealloc-lhs -Wrealloc-lhs-all -Wfrontend-loop-interchange -Wtarget-lifetime -fmax-errors=n -fsyntax-only -pedantic -pedantic-errors Debugging Options -fbacktrace -fdump-fortran-optimized -fdump-fortran-original -fdump-fortran-global -fdump-parse-tree -ffpe-trap=list -ffpe-summary=list Directory Options -Idir -Jdir -fintrinsic-modules-path dir Link Options -static-libgfortran Runtime Options -fconvert=conversion -fmax-subrecord-length=length -frecord-marker=length -fsign-zero Interoperability Options -fc-prototypes -fc-prototypes-external Code Generation Options -faggressive-function-elimination -fblas-matmul-limit=n -fbounds-check -ftail-call-workaround -ftail-call-workaround=n -fcheck-array-temporaries -fcheck=<all|array-temps|bounds|do|mem|pointer|recursion> -fcoarray=<none|single|lib> -fexternal-blas -ff2c -ffrontend-loop-interchange -ffrontend-optimize -finit-character=n -finit-integer=n -finit-local-zero -finit-derived 
-finit-logical=<true|false> -finit-real=<zero|inf|-inf|nan|snan> -finline-matmul-limit=n -fmax-array-constructor=n -fmax-stack-var-size=n -fno-align-commons -fno-automatic -fno-protect-parens -fno-underscoring -fsecond-underscore -fpack-derived -frealloc-lhs -frecursive -frepack-arrays -fshort-enums -fstack-arrays Options controlling Fortran dialect The following options control the details of the Fortran dialect accepted by the compiler: -ffree-form -ffixed-form Specify the layout used by the source file. The free form layout was introduced in Fortran 90. Fixed form was traditionally used in older Fortran programs. When neither option is specified, the source form is determined by the file extension. -fall-intrinsics This option causes all intrinsic procedures (including the GNU-specific extensions) to be accepted. This can be useful with -std=f95 to force standard-compliance but get access to the full range of intrinsics available with gfortran. As a consequence, -Wintrinsics-std will be ignored and no user- defined procedure with the same name as any intrinsic will be called except when it is explicitly declared "EXTERNAL". -fd-lines-as-code -fd-lines-as-comments Enable special treatment for lines beginning with "d" or "D" in fixed form sources. If the -fd-lines-as-code option is given they are treated as if the first column contained a blank. If the -fd-lines-as-comments option is given, they are treated as comment lines. -fdec DEC compatibility mode. Enables extensions and other features that mimic the default behavior of older compilers (such as DEC). These features are non-standard and should be avoided at all costs. For details on GNU Fortran's implementation of these extensions see the full documentation. Other flags enabled by this switch are: -fdollar-ok -fcray-pointer -fdec-structure -fdec-intrinsic-ints -fdec-static -fdec-math If -fd-lines-as-code/-fd-lines-as-comments are unset, then -fdec also sets -fd-lines-as-comments. -fdec-structure Enable DEC "STRUCTURE" and "RECORD" as well as "UNION", "MAP", and dot ('.') as a member separator (in addition to '%'). This is provided for compatibility only; Fortran 90 derived types should be used instead where possible. -fdec-intrinsic-ints Enable B/I/J/K kind variants of existing integer functions (e.g. BIAND, IIAND, JIAND, etc...). For a complete list of intrinsics see the full documentation. -fdec-math Enable legacy math intrinsics such as COTAN and degree-valued trigonometric functions (e.g. TAND, ATAND, etc...) for compatability with older code. -fdec-static Enable DEC-style STATIC and AUTOMATIC attributes to explicitly specify the storage of variables and other objects. -fdec-include Enable parsing of INCLUDE as a statement in addition to parsing it as INCLUDE line. When parsed as INCLUDE statement, INCLUDE does not have to be on a single line and can use line continuations. -fdollar-ok Allow $ as a valid non-first character in a symbol name. Symbols that start with $ are rejected since it is unclear which rules to apply to implicit typing as different vendors implement different rules. Using $ in "IMPLICIT" statements is also rejected. -fbackslash Change the interpretation of backslashes in string literals from a single backslash character to "C-style" escape characters. The following combinations are expanded "\a", "\b", "\f", "\n", "\r", "\t", "\v", "\\", and "\0" to the ASCII characters alert, backspace, form feed, newline, carriage return, horizontal tab, vertical tab, backslash, and NUL, respectively. 
Additionally, "\x"nn, "\u"nnnn and "\U"nnnnnnnn (where each n is a hexadecimal digit) are translated into the Unicode characters corresponding to the specified code points. All other combinations of a character preceded by \ are unexpanded. -fmodule-private Set the default accessibility of module entities to "PRIVATE". Use-associated entities will not be accessible unless they are explicitly declared as "PUBLIC". -ffixed-line-length-n Set column after which characters are ignored in typical fixed-form lines in the source file, and, unless "-fno-pad-source", through which spaces are assumed (as if padded to that length) after the ends of short fixed-form lines. Popular values for n include 72 (the standard and the default), 80 (card image), and 132 (corresponding to "extended-source" options in some popular compilers). n may also be none, meaning that the entire line is meaningful and that continued character constants never have implicit spaces appended to them to fill out the line. -ffixed-line-length-0 means the same thing as -ffixed-line-length-none. -fno-pad-source By default fixed-form lines have spaces assumed (as if padded to that length) after the ends of short fixed-form lines. This is not done either if -ffixed-line-length-0, -ffixed-line-length-none or if -fno-pad-source option is used. With any of those options continued character constants never have implicit spaces appended to them to fill out the line. -ffree-line-length-n Set column after which characters are ignored in typical free-form lines in the source file. The default value is 132. n may be none, meaning that the entire line is meaningful. -ffree-line-length-0 means the same thing as -ffree-line-length-none. -fmax-identifier-length=n Specify the maximum allowed identifier length. Typical values are 31 (Fortran 95) and 63 (Fortran 2003 and Fortran 2008). -fimplicit-none Specify that no implicit typing is allowed, unless overridden by explicit "IMPLICIT" statements. This is the equivalent of adding "implicit none" to the start of every procedure. -fcray-pointer Enable the Cray pointer extension, which provides C-like pointer functionality. -fopenacc Enable the OpenACC extensions. This includes OpenACC "!$acc" directives in free form and "c$acc", *$acc and "!$acc" directives in fixed form, "!$" conditional compilation sentinels in free form and "c$", "*$" and "!$" sentinels in fixed form, and when linking arranges for the OpenACC runtime library to be linked in. Note that this is an experimental feature, incomplete, and subject to change in future versions of GCC. See <https://gcc.gnu.org/wiki/OpenACC > for more information. -fopenmp Enable the OpenMP extensions. This includes OpenMP "!$omp" directives in free form and "c$omp", *$omp and "!$omp" directives in fixed form, "!$" conditional compilation sentinels in free form and "c$", "*$" and "!$" sentinels in fixed form, and when linking arranges for the OpenMP runtime library to be linked in. The option -fopenmp implies -frecursive. -fno-range-check Disable range checking on results of simplification of constant expressions during compilation. For example, GNU Fortran will give an error at compile time when simplifying "a = 1. / 0". With this option, no error will be given and "a" will be assigned the value "+Infinity". If an expression evaluates to a value outside of the relevant range of ["-HUGE()":"HUGE()"], then the expression will be replaced by "-Inf" or "+Inf" as appropriate. 
Similarly, "DATA i/Z'FFFFFFFF'/" will result in an integer overflow on most systems, but with -fno-range-check the value will "wrap around" and "i" will be initialized to -1 instead. -fdefault-integer-8 Set the default integer and logical types to an 8 byte wide type. This option also affects the kind of integer constants like 42. Unlike -finteger-4-integer-8, it does not promote variables with explicit kind declaration. -fdefault-real-8 Set the default real type to an 8 byte wide type. This option also affects the kind of non-double real constants like 1.0. This option promotes the default width of "DOUBLE PRECISION" and double real constants like "1.d0" to 16 bytes if possible. If "-fdefault-double-8" is given along with "fdefault-real-8", "DOUBLE PRECISION" and double real constants are not promoted. Unlike -freal-4-real-8, "fdefault-real-8" does not promote variables with explicit kind declarations. -fdefault-real-10 Set the default real type to an 10 byte wide type. This option also affects the kind of non-double real constants like 1.0. This option promotes the default width of "DOUBLE PRECISION" and double real constants like "1.d0" to 16 bytes if possible. If "-fdefault-double-8" is given along with "fdefault-real-10", "DOUBLE PRECISION" and double real constants are not promoted. Unlike -freal-4-real-10, "fdefault-real-10" does not promote variables with explicit kind declarations. -fdefault-real-16 Set the default real type to an 16 byte wide type. This option also affects the kind of non-double real constants like 1.0. This option promotes the default width of "DOUBLE PRECISION" and double real constants like "1.d0" to 16 bytes if possible. If "-fdefault-double-8" is given along with "fdefault-real-16", "DOUBLE PRECISION" and double real constants are not promoted. Unlike -freal-4-real-16, "fdefault-real-16" does not promote variables with explicit kind declarations. -fdefault-double-8 Set the "DOUBLE PRECISION" type and double real constants like "1.d0" to an 8 byte wide type. Do nothing if this is already the default. This option prevents -fdefault-real-8, -fdefault-real-10, and -fdefault-real-16, from promoting "DOUBLE PRECISION" and double real constants like "1.d0" to 16 bytes. -finteger-4-integer-8 Promote all "INTEGER(KIND=4)" entities to an "INTEGER(KIND=8)" entities. If "KIND=8" is unavailable, then an error will be issued. This option should be used with care and may not be suitable for your codes. Areas of possible concern include calls to external procedures, alignment in "EQUIVALENCE" and/or "COMMON", generic interfaces, BOZ literal constant conversion, and I/O. Inspection of the intermediate representation of the translated Fortran code, produced by -fdump-tree-original, is suggested. -freal-4-real-8 -freal-4-real-10 -freal-4-real-16 -freal-8-real-4 -freal-8-real-10 -freal-8-real-16 Promote all "REAL(KIND=M)" entities to "REAL(KIND=N)" entities. If "REAL(KIND=N)" is unavailable, then an error will be issued. All other real kind types are unaffected by this option. These options should be used with care and may not be suitable for your codes. Areas of possible concern include calls to external procedures, alignment in "EQUIVALENCE" and/or "COMMON", generic interfaces, BOZ literal constant conversion, and I/O. Inspection of the intermediate representation of the translated Fortran code, produced by -fdump-tree-original, is suggested. 
-std=std Specify the standard to which the program is expected to conform, which may be one of f95, f2003, f2008, f2018, gnu, or legacy. The default value for std is gnu, which specifies a superset of the latest Fortran standard that includes all of the extensions supported by GNU Fortran, although warnings will be given for obsolete extensions not recommended for use in new code. The legacy value is equivalent but without the warnings for obsolete extensions, and may be useful for old non-standard programs. The f95, f2003, f2008, and f2018 values specify strict conformance to the Fortran 95, Fortran 2003, Fortran 2008 and Fortran 2018 standards, respectively; errors are given for all extensions beyond the relevant language standard, and warnings are given for the Fortran 77 features that are permitted but obsolescent in later standards. The deprecated option -std=f2008ts acts as an alias for -std=f2018. It is only present for backwards compatibility with earlier gfortran versions and should not be used any more. -ftest-forall-temp Enhance test coverage by forcing most forall assignments to use temporary. Enable and customize preprocessing Preprocessor related options. See section Preprocessing and conditional compilation for more detailed information on preprocessing in gfortran. -cpp -nocpp Enable preprocessing. The preprocessor is automatically invoked if the file extension is .fpp, .FPP, .F, .FOR, .FTN, .F90, .F95, .F03 or .F08. Use this option to manually enable preprocessing of any kind of Fortran file. To disable preprocessing of files with any of the above listed extensions, use the negative form: -nocpp. The preprocessor is run in traditional mode. Any restrictions of the file-format, especially the limits on line length, apply for preprocessed output as well, so it might be advisable to use the -ffree-line-length-none or -ffixed-line-length-none options. -dM Instead of the normal output, generate a list of '#define' directives for all the macros defined during the execution of the preprocessor, including predefined macros. This gives you a way of finding out what is predefined in your version of the preprocessor. Assuming you have no file foo.f90, the command touch foo.f90; gfortran -cpp -E -dM foo.f90 will show all the predefined macros. -dD Like -dM except in two respects: it does not include the predefined macros, and it outputs both the "#define" directives and the result of preprocessing. Both kinds of output go to the standard output file. -dN Like -dD, but emit only the macro names, not their expansions. -dU Like dD except that only macros that are expanded, or whose definedness is tested in preprocessor directives, are output; the output is delayed until the use or test of the macro; and '#undef' directives are also output for macros tested but undefined at the time. -dI Output '#include' directives in addition to the result of preprocessing. -fworking-directory Enable generation of linemarkers in the preprocessor output that will let the compiler know the current working directory at the time of preprocessing. When this option is enabled, the preprocessor will emit, after the initial linemarker, a second linemarker with the current working directory followed by two slashes. GCC will use this directory, when it is present in the preprocessed input, as the directory emitted as the current working directory in some debugging information formats. 
This option is implicitly enabled if debugging information is enabled, but this can be inhibited with the negated form -fno-working-directory. If the -P flag is present in the command line, this option has no effect, since no "#line" directives are emitted whatsoever. -idirafter dir Search dir for include files, but do it after all directories specified with -I and the standard system directories have been exhausted. dir is treated as a system include directory. If dir begins with "=", then the "=" will be replaced by the sysroot prefix; see --sysroot and -isysroot. -imultilib dir Use dir as a subdirectory of the directory containing target- specific C++ headers. -iprefix prefix Specify prefix as the prefix for subsequent -iwithprefix options. If the prefix represents a directory, you should include the final '/'. -isysroot dir This option is like the --sysroot option, but applies only to header files. See the --sysroot option for more information. -iquote dir Search dir only for header files requested with "#include "file""; they are not searched for "#include <file>", before all directories specified by -I and before the standard system directories. If dir begins with "=", then the "=" will be replaced by the sysroot prefix; see --sysroot and -isysroot. -isystem dir Search dir for header files, after all directories specified by -I but before the standard system directories. Mark it as a system directory, so that it gets the same special treatment as is applied to the standard system directories. If dir begins with "=", then the "=" will be replaced by the sysroot prefix; see --sysroot and -isysroot. -nostdinc Do not search the standard system directories for header files. Only the directories you have specified with -I options (and the directory of the current file, if appropriate) are searched. -undef Do not predefine any system-specific or GCC-specific macros. The standard predefined macros remain defined. -Apredicate=answer Make an assertion with the predicate predicate and answer answer. This form is preferred to the older form -A predicate(answer), which is still supported, because it does not use shell special characters. -A-predicate=answer Cancel an assertion with the predicate predicate and answer answer. -C Do not discard comments. All comments are passed through to the output file, except for comments in processed directives, which are deleted along with the directive. You should be prepared for side effects when using -C; it causes the preprocessor to treat comments as tokens in their own right. For example, comments appearing at the start of what would be a directive line have the effect of turning that line into an ordinary source line, since the first token on the line is no longer a '#'. Warning: this currently handles C-Style comments only. The preprocessor does not yet recognize Fortran-style comments. -CC Do not discard comments, including during macro expansion. This is like -C, except that comments contained within macros are also passed through to the output file where the macro is expanded. In addition to the side-effects of the -C option, the -CC option causes all C++-style comments inside a macro to be converted to C-style comments. This is to prevent later use of that macro from inadvertently commenting out the remainder of the source line. The -CC option is generally used to support lint comments. Warning: this currently handles C- and C++-Style comments only. The preprocessor does not yet recognize Fortran-style comments. 
-Dname Predefine name as a macro, with definition 1. -Dname=definition The contents of definition are tokenized and processed as if they appeared during translation phase three in a '#define' directive. In particular, the definition will be truncated by embedded newline characters. If you are invoking the preprocessor from a shell or shell- like program you may need to use the shell's quoting syntax to protect characters such as spaces that have a meaning in the shell syntax. If you wish to define a function-like macro on the command line, write its argument list with surrounding parentheses before the equals sign (if any). Parentheses are meaningful to most shells, so you will need to quote the option. With sh and csh, "-D'name(args...)=definition'" works. -D and -U options are processed in the order they are given on the command line. All -imacros file and -include file options are processed after all -D and -U options. -H Print the name of each header file used, in addition to other normal activities. Each name is indented to show how deep in the '#include' stack it is. -P Inhibit generation of linemarkers in the output from the preprocessor. This might be useful when running the preprocessor on something that is not C code, and will be sent to a program which might be confused by the linemarkers. -Uname Cancel any previous definition of name, either built in or provided with a -D option. Options to request or suppress errors and warnings Errors are diagnostic messages that report that the GNU Fortran compiler cannot compile the relevant piece of source code. The compiler will continue to process the program in an attempt to report further errors to aid in debugging, but will not produce any compiled output. Warnings are diagnostic messages that report constructions which are not inherently erroneous but which are risky or suggest there is likely to be a bug in the program. Unless -Werror is specified, they do not prevent compilation of the program. You can request many specific warnings with options beginning -W, for example -Wimplicit to request warnings on implicit declarations. Each of these specific warning options also has a negative form beginning -Wno- to turn off warnings; for example, -Wno-implicit. This manual lists only one of the two forms, whichever is not the default. These options control the amount and kinds of errors and warnings produced by GNU Fortran: -fmax-errors=n Limits the maximum number of error messages to n, at which point GNU Fortran bails out rather than attempting to continue processing the source code. If n is 0, there is no limit on the number of error messages produced. -fsyntax-only Check the code for syntax errors, but do not actually compile it. This will generate module files for each module present in the code, but no other output file. -Wpedantic -pedantic Issue warnings for uses of extensions to Fortran. -pedantic also applies to C-language constructs where they occur in GNU Fortran source files, such as use of \e in a character constant within a directive like "#include". Valid Fortran programs should compile properly with or without this option. However, without this option, certain GNU extensions and traditional Fortran features are supported as well. With this option, many of them are rejected. Some users try to use -pedantic to check programs for conformance. They soon find that it does not do quite what they want---it finds some nonstandard practices, but not all. However, improvements to GNU Fortran in this area are welcome. 
This should be used in conjunction with -std=f95, -std=f2003, -std=f2008 or -std=f2018. -pedantic-errors Like -pedantic, except that errors are produced rather than warnings. -Wall Enables commonly used warning options pertaining to usage that we recommend avoiding and that we believe are easy to avoid. This currently includes -Waliasing, -Wampersand, -Wconversion, -Wsurprising, -Wc-binding-type, -Wintrinsics-std, -Wtabs, -Wintrinsic-shadow, -Wline-truncation, -Wtarget-lifetime, -Winteger-division, -Wreal-q-constant, -Wunused and -Wundefined-do-loop. -Waliasing Warn about possible aliasing of dummy arguments. Specifically, it warns if the same actual argument is associated with a dummy argument with "INTENT(IN)" and a dummy argument with "INTENT(OUT)" in a call with an explicit interface. The following example will trigger the warning.

              interface
                subroutine bar(a,b)
                  integer, intent(in)  :: a
                  integer, intent(out) :: b
                end subroutine
              end interface
              integer :: a

              call bar(a,a)

-Wampersand Warn about missing ampersand in continued character constants. The warning is given with -Wampersand, -pedantic, -std=f95, -std=f2003, -std=f2008 and -std=f2018. Note: With no ampersand given in a continued character constant, GNU Fortran assumes continuation at the first non-comment, non-whitespace character after the ampersand that initiated the continuation. -Wargument-mismatch Warn about type, rank, and other mismatches between formal parameters and actual arguments to functions and subroutines. These warnings are recommended and thus enabled by default. -Warray-temporaries Warn about array temporaries generated by the compiler. The information generated by this warning is sometimes useful in optimization, in order to avoid such temporaries. -Wc-binding-type Warn if a variable might not be C interoperable. In particular, warn if the variable has been declared using an intrinsic type with default kind instead of using a kind parameter defined for C interoperability in the intrinsic "ISO_C_Binding" module. This option is implied by -Wall. -Wcharacter-truncation Warn when a character assignment will truncate the assigned string. -Wline-truncation Warn when a source code line will be truncated. This option is implied by -Wall. For free-form source code, the default is -Werror=line-truncation such that truncations are reported as errors. -Wconversion Warn about implicit conversions that are likely to change the value of the expression after conversion. Implied by -Wall. -Wconversion-extra Warn about implicit conversions between different types and kinds. This option does not imply -Wconversion. -Wextra Enables some warning options for usages of language features which may be problematic. This currently includes -Wcompare-reals, -Wunused-parameter and -Wdo-subscript. -Wfrontend-loop-interchange Enable warning for loop interchanges performed by the -ffrontend-loop-interchange option. -Wimplicit-interface Warn if a procedure is called without an explicit interface. Note that this only checks that an explicit interface is present. It does not check that the declared interfaces are consistent across program units. -Wimplicit-procedure Warn if a procedure is called that has neither an explicit interface nor has been declared as "EXTERNAL". -Winteger-division Warn if a constant integer division truncates its result. As an example, 3/5 evaluates to 0.
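A short sketch (hypothetical file name) of code that trips two of the warnings above when compiled with -Wall:

              ! wdemo.f90
              program wdemo
                implicit none
                integer :: i
                real    :: x = 2.7
                i = x              ! REAL-to-INTEGER assignment: -Wconversion
                print *, 3/5, i    ! constant integer division: -Winteger-division
              end program wdemo

       Compiling it with gfortran -Wall -c wdemo.f90 should report both issues; adding -Werror would turn the warnings into hard errors.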
-Wintrinsics-std Warn if gfortran finds a procedure named like an intrinsic not available in the currently selected standard (with -std) and treats it as "EXTERNAL" procedure because of this. -fall-intrinsics can be used to never trigger this behavior and always link to the intrinsic regardless of the selected standard. -Wreal-q-constant Produce a warning if a real-literal-constant contains a "q" exponent-letter. -Wsurprising Produce a warning when "suspicious" code constructs are encountered. While technically legal these usually indicate that an error has been made. This currently produces a warning under the following circumstances: * An INTEGER SELECT construct has a CASE that can never be matched as its lower value is greater than its upper value. * A LOGICAL SELECT construct has three CASE statements. * A TRANSFER specifies a source that is shorter than the destination. * The type of a function result is declared more than once with the same type. If -pedantic or standard-conforming mode is enabled, this is an error. * A "CHARACTER" variable is declared with negative length. -Wtabs By default, tabs are accepted as whitespace, but tabs are not members of the Fortran Character Set. For continuation lines, a tab followed by a digit between 1 and 9 is supported. -Wtabs will cause a warning to be issued if a tab is encountered. Note, -Wtabs is active for -pedantic, -std=f95, -std=f2003, -std=f2008, -std=f2018 and -Wall. -Wundefined-do-loop Warn if a DO loop with step either 1 or -1 yields an underflow or an overflow during iteration of an induction variable of the loop. This option is implied by -Wall. -Wunderflow Produce a warning when numerical constant expressions are encountered, which yield an UNDERFLOW during compilation. Enabled by default. -Wintrinsic-shadow Warn if a user-defined procedure or module procedure has the same name as an intrinsic; in this case, an explicit interface or "EXTERNAL" or "INTRINSIC" declaration might be needed to get calls later resolved to the desired intrinsic/procedure. This option is implied by -Wall. -Wuse-without-only Warn if a "USE" statement has no "ONLY" qualifier and thus implicitly imports all public entities of the used module. -Wunused-dummy-argument Warn about unused dummy arguments. This option is implied by -Wall. -Wunused-parameter Contrary to gcc's meaning of -Wunused-parameter, gfortran's implementation of this option does not warn about unused dummy arguments (see -Wunused-dummy-argument), but about unused "PARAMETER" values. -Wunused-parameter is implied by -Wextra if also -Wunused or -Wall is used. -Walign-commons By default, gfortran warns about any occasion of variables being padded for proper alignment inside a "COMMON" block. This warning can be turned off via -Wno-align-commons. See also -falign-commons. -Wfunction-elimination Warn if any calls to impure functions are eliminated by the optimizations enabled by the -ffrontend-optimize option. This option is implied by -Wextra. -Wrealloc-lhs Warn when the compiler might insert code to for allocation or reallocation of an allocatable array variable of intrinsic type in intrinsic assignments. In hot loops, the Fortran 2003 reallocation feature may reduce the performance. If the array is already allocated with the correct shape, consider using a whole-array array-spec (e.g. "(:,:,:)") for the variable on the left-hand side to prevent the reallocation check. Note that in some cases the warning is shown, even if the compiler will optimize reallocation checks away. 
For instance, when the right-hand side contains the same variable multiplied by a scalar. See also -frealloc-lhs. -Wrealloc-lhs-all Warn when the compiler inserts code for allocation or reallocation of an allocatable variable; this includes scalars and derived types. -Wcompare-reals Warn when comparing real or complex types for equality or inequality. This option is implied by -Wextra. -Wtarget-lifetime Warn if the pointer in a pointer assignment might be longer lived than its target. This option is implied by -Wall. -Wzerotrip Warn if a "DO" loop is known to execute zero times at compile time. This option is implied by -Wall. -Wdo-subscript Warn if an array subscript inside a DO loop could lead to an out-of-bounds access even if the compiler cannot prove that the statement is actually executed, in cases like

              real a(3)
              do i=1,4
                if (condition(i)) then
                  a(i) = 1.2
                end if
              end do

This option is implied by -Wextra. -Werror Turns all warnings into errors. Additional warning options are offered by the back end shared with gcc and the other GNU compilers; some of these have no effect when compiling programs written in Fortran. Options for debugging your program or GNU Fortran GNU Fortran has various special options that are used for debugging either your program or the GNU Fortran compiler. -fdump-fortran-original Output the internal parse tree after translating the source program into internal representation. This option is mostly useful for debugging the GNU Fortran compiler itself. The output generated by this option might change between releases. This option may also generate internal compiler errors for features which have only recently been added. -fdump-fortran-optimized Output the parse tree after front-end optimization. Mostly useful for debugging the GNU Fortran compiler itself. The output generated by this option might change between releases. This option may also generate internal compiler errors for features which have only recently been added. -fdump-parse-tree Output the internal parse tree after translating the source program into internal representation. Mostly useful for debugging the GNU Fortran compiler itself. The output generated by this option might change between releases. This option may also generate internal compiler errors for features which have only recently been added. This option is deprecated; use "-fdump-fortran-original" instead. -fdump-fortran-global Output a list of the global identifiers after translating into middle-end representation. Mostly useful for debugging the GNU Fortran compiler itself. The output generated by this option might change between releases. This option may also generate internal compiler errors for features which have only recently been added. -ffpe-trap=list Specify a list of floating point exception traps to enable. On most systems, if a floating point exception occurs and the trap for that exception is enabled, a SIGFPE signal will be sent and the program will be aborted, producing a core file useful for debugging. list is a (possibly empty) comma-separated list of the following exceptions: invalid (invalid floating point operation, such as "SQRT(-1.0)"), zero (division by zero), overflow (overflow in a floating point operation), underflow (underflow in a floating point operation), inexact (loss of precision during operation), and denormal (operation performed on a denormal value). The first five exceptions correspond to the five IEEE 754 exceptions, whereas the last one (denormal) is not part of the IEEE 754 standard but is available on some common architectures such as x86.
The first three exceptions (invalid, zero, and overflow) often indicate serious errors, and unless the program has provisions for dealing with these exceptions, enabling traps for these three exceptions is probably a good idea. If the option is used more than once in the command line, the lists will be joined: -ffpe-trap=list1 -ffpe-trap=list2 is equivalent to -ffpe-trap=list1,list2. Note that once enabled, an exception cannot be disabled (there is no negative form). Many, if not most, floating point operations incur loss of precision due to rounding, and hence -ffpe-trap=inexact is likely to be uninteresting in practice. By default no exception traps are enabled. -ffpe-summary=list Specify a list of floating-point exceptions, whose flag status is printed to "ERROR_UNIT" when invoking "STOP" and "ERROR STOP". list can be either none, all or a comma-separated list of the following exceptions: invalid, zero, overflow, underflow, inexact and denormal. (See -ffpe-trap for a description of the exceptions.) If the option is used more than once in the command line, only the last one will be used. By default, a summary for all exceptions but inexact is shown. -fno-backtrace When a serious runtime error is encountered or a deadly signal is emitted (segmentation fault, illegal instruction, bus error, floating-point exception, and the other POSIX signals that have the action core), the Fortran runtime library tries to output a backtrace of the error. "-fno-backtrace" disables the backtrace generation. This option only has influence for compilation of the Fortran main program. Options for directory search These options affect how GNU Fortran searches for files specified by the "INCLUDE" directive and where it searches for previously compiled modules. It also affects the search paths used by cpp when used to preprocess Fortran source. -Idir These affect interpretation of the "INCLUDE" directive (as well as of the "#include" directive of the cpp preprocessor). Also note that the general behavior of -I and "INCLUDE" is pretty much the same as of -I with "#include" in the cpp preprocessor, with regard to looking for header.gcc files and other such things. This path is also used to search for .mod files when previously compiled modules are required by a "USE" statement. -Jdir This option specifies where to put .mod files for compiled modules. It is also added to the list of directories to be searched by a "USE" statement. The default is the current directory. -fintrinsic-modules-path dir This option specifies the location of pre-compiled intrinsic modules, if they are not in the default location expected by the compiler. Influencing the linking step These options come into play when the compiler links object files into an executable output file. They are meaningless if the compiler is not doing a link step. -static-libgfortran On systems that provide libgfortran as a shared and a static library, this option forces the use of the static version. If no shared version of libgfortran was built when the compiler was configured, this option has no effect. Influencing runtime behavior These options affect the runtime behavior of programs compiled with GNU Fortran. -fconvert=conversion Specify the representation of data for unformatted files. Valid values for conversion are: native, the default; swap, swap between big- and little-endian; big-endian, use big-endian representation for unformatted files; little-endian, use little-endian representation for unformatted files.
This option has an effect only when used in the main program. The "CONVERT" specifier and the GFORTRAN_CONVERT_UNIT environment variable override the default specified by -fconvert. -frecord-marker=length Specify the length of record markers for unformatted files. Valid values for length are 4 and 8. Default is 4. This is different from previous versions of gfortran, which specified a default record marker length of 8 on most systems. If you want to read or write files compatible with earlier versions of gfortran, use -frecord-marker=8. -fmax-subrecord-length=length Specify the maximum length for a subrecord. The maximum permitted value for length is 2147483639, which is also the default. Only really useful for use by the gfortran testsuite. -fsign-zero When enabled, floating point numbers of value zero with the sign bit set are written as negative number in formatted output and treated as negative in the "SIGN" intrinsic. -fno-sign-zero does not print the negative sign of zero values (or values rounded to zero for I/O) and regards zero as positive number in the "SIGN" intrinsic for compatibility with Fortran 77. The default is -fsign-zero. Options for code generation conventions These machine-independent options control the interface conventions used in code generation. Most of them have both positive and negative forms; the negative form of -ffoo would be -fno-foo. In the table below, only one of the forms is listed---the one which is not the default. You can figure out the other form by either removing no- or adding it. -fno-automatic Treat each program unit (except those marked as RECURSIVE) as if the "SAVE" statement were specified for every local variable and array referenced in it. Does not affect common blocks. (Some Fortran compilers provide this option under the name -static or -save.) The default, which is -fautomatic, uses the stack for local variables smaller than the value given by -fmax-stack-var-size. Use the option -frecursive to use no static memory. Local variables or arrays having an explicit "SAVE" attribute are silently ignored unless the -pedantic option is added. -ff2c Generate code designed to be compatible with code generated by g77 and f2c. The calling conventions used by g77 (originally implemented in f2c) require functions that return type default "REAL" to actually return the C type "double", and functions that return type "COMPLEX" to return the values via an extra argument in the calling sequence that points to where to store the return value. Under the default GNU calling conventions, such functions simply return their results as they would in GNU C---default "REAL" functions return the C type "float", and "COMPLEX" functions return the GNU C type "complex". Additionally, this option implies the -fsecond-underscore option, unless -fno-second-underscore is explicitly requested. This does not affect the generation of code that interfaces with the libgfortran library. Caution: It is not a good idea to mix Fortran code compiled with -ff2c with code compiled with the default -fno-f2c calling conventions as, calling "COMPLEX" or default "REAL" functions between program parts which were compiled with different calling conventions will break at execution time. Caution: This will break code which passes intrinsic functions of type default "REAL" or "COMPLEX" as actual arguments, as the library implementations use the -fno-f2c calling conventions. 
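As a sketch of the unformatted-I/O options described above under Influencing runtime behavior (-fconvert and -frecord-marker; file names are hypothetical), the following program writes a single unformatted record:

              ! bigout.f90
              program bigout
                implicit none
                integer :: v(3) = [1, 2, 3]
                open (10, file='data.unf', form='unformatted')
                write (10) v
                close (10)
              end program bigout

       Building it with gfortran -fconvert=big-endian bigout.f90 -o bigout should store the record in big-endian byte order regardless of the host, and adding -frecord-marker=8 would additionally reproduce the 8-byte record markers written by much older gfortran releases.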
-fno-underscoring Do not transform names of entities specified in the Fortran source file by appending underscores to them. With -funderscoring in effect, GNU Fortran appends one underscore to external names with no underscores. This is done to ensure compatibility with code produced by many UNIX Fortran compilers. Caution: The default behavior of GNU Fortran is incompatible with f2c and g77, please use the -ff2c option if you want object files compiled with GNU Fortran to be compatible with object code created with these tools. Use of -fno-underscoring is not recommended unless you are experimenting with issues such as integration of GNU Fortran into existing system environments (vis-a-vis existing libraries, tools, and so on). For example, with -funderscoring, and assuming that "j()" and "max_count()" are external functions while "my_var" and "lvar" are local variables, a statement like I = J() + MAX_COUNT (MY_VAR, LVAR) is implemented as something akin to: i = j_() + max_count__(&my_var__, &lvar); With -fno-underscoring, the same statement is implemented as: i = j() + max_count(&my_var, &lvar); Use of -fno-underscoring allows direct specification of user- defined names while debugging and when interfacing GNU Fortran code with other languages. Note that just because the names match does not mean that the interface implemented by GNU Fortran for an external name matches the interface implemented by some other language for that same name. That is, getting code produced by GNU Fortran to link to code produced by some other compiler using this or any other method can be only a small part of the overall solution---getting the code generated by both compilers to agree on issues other than naming can require significant effort, and, unlike naming disagreements, linkers normally cannot detect disagreements in these other areas. Also, note that with -fno-underscoring, the lack of appended underscores introduces the very real possibility that a user- defined external name will conflict with a name in a system library, which could make finding unresolved-reference bugs quite difficult in some cases---they might occur at program run time, and show up only as buggy behavior at run time. In future versions of GNU Fortran we hope to improve naming and linking issues so that debugging always involves using the names as they appear in the source, even if the names as seen by the linker are mangled to prevent accidental linking between procedures with incompatible interfaces. -fsecond-underscore By default, GNU Fortran appends an underscore to external names. If this option is used GNU Fortran appends two underscores to names with underscores and one underscore to external names with no underscores. GNU Fortran also appends two underscores to internal names with underscores to avoid naming collisions with external names. This option has no effect if -fno-underscoring is in effect. It is implied by the -ff2c option. Otherwise, with this option, an external name such as "MAX_COUNT" is implemented as a reference to the link-time external symbol "max_count__", instead of "max_count_". This is required for compatibility with g77 and f2c, and is implied by use of the -ff2c option. -fcoarray=<keyword> none Disable coarray support; using coarray declarations and image-control statements will produce a compile-time error. (Default) single Single-image mode, i.e. "num_images()" is always one. lib Library-based coarray parallelization; a suitable GNU Fortran coarray library needs to be linked. 
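To illustrate the -fcoarray modes just described, a minimal sketch (hypothetical file name) that only queries the image intrinsics:

              ! hello_caf.f90
              program hello_caf
                implicit none
                print *, 'image', this_image(), 'of', num_images()
              end program hello_caf

       With the default -fcoarray=none the references to this_image and num_images should be rejected at compile time because coarray support is disabled; gfortran -fcoarray=single hello_caf.f90 builds a program for which num_images() is always 1; and -fcoarray=lib additionally requires linking a coarray library (for example OpenCoarrays, which is an assumption about the local installation) to run with several images.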
-fcheck=<keyword> Enable the generation of run-time checks; the argument shall be a comma-delimited list of the following keywords. Prefixing a check with no- disables it if it was activated by a previous specification. all Enable all run-time checks of -fcheck. array-temps Warns at run time when a temporary array had to be generated for the passing of an actual argument. The information generated by this warning is sometimes useful in optimization, in order to avoid such temporaries. Note: The warning is only printed once per location. bounds Enable generation of run-time checks for array subscripts and against the declared minimum and maximum values. It also checks array indices for assumed and deferred shape arrays against the actual allocated bounds and ensures that all string lengths are equal for character array constructors without an explicit typespec. Some checks require that -fcheck=bounds is set for the compilation of the main program. Note: In the future this may also include other forms of checking, e.g., checking substring references. do Enable generation of run-time checks for invalid modification of loop iteration variables. mem Enable generation of run-time checks for memory allocation. Note: This option does not affect explicit allocations using the "ALLOCATE" statement, which will always be checked. pointer Enable generation of run-time checks for pointers and allocatables. recursion Enable generation of run-time checks for recursively called subroutines and functions which are not marked as recursive. See also -frecursive. Note: This check does not work for OpenMP programs and is disabled if used together with -frecursive and -fopenmp. Example: Assuming you have a file foo.f90, the command gfortran -fcheck=all,no-array-temps foo.f90 will compile the file with all checks enabled as specified above except warnings for generated array temporaries. -fbounds-check Deprecated alias for -fcheck=bounds. -ftail-call-workaround -ftail-call-workaround=n Some C interfaces to Fortran codes violate the gfortran ABI by omitting the hidden character length arguments (see the section on argument passing conventions in the GNU Fortran manual). This can lead to crashes because pushing arguments for tail calls can overflow the stack. To provide a workaround for existing binary packages, this option disables tail call optimization for gfortran procedures with character arguments. With -ftail-call-workaround=2 tail call optimization is disabled in all gfortran procedures with character arguments, with -ftail-call-workaround=1 or equivalent -ftail-call-workaround only in gfortran procedures with character arguments that call implicitly prototyped procedures. Using this option can lead to problems including crashes due to insufficient stack space. It is very strongly recommended to fix the code in question. The -fc-prototypes-external option can be used to generate prototypes which conform to gfortran's ABI, for inclusion in the source code. Support for this option will likely be withdrawn in a future release of gfortran. The negative form, -fno-tail-call-workaround or equivalent -ftail-call-workaround=0, can be used to disable this option. Default is currently -ftail-call-workaround; this will change in future releases. -fcheck-array-temporaries Deprecated alias for -fcheck=array-temps. -fmax-array-constructor=n This option can be used to increase the upper limit permitted in array constructors. The code below requires this option to expand the array at compile time.
              program test
                implicit none
                integer j
                integer, parameter :: n = 100000
                integer, parameter :: i(n) = (/ (2*j, j = 1, n) /)
                print '(10(I0,1X))', i
              end program test

Caution: This option can lead to long compile times and excessively large object files. The default value for n is 65535. -fmax-stack-var-size=n This option specifies the size in bytes of the largest array that will be put on the stack; if the size is exceeded static memory is used (except in procedures marked as RECURSIVE). Use the option -frecursive to allow for recursive procedures which do not have a RECURSIVE attribute or for parallel programs. Use -fno-automatic to never use the stack. This option currently only affects local arrays declared with constant bounds, and may not apply to all character variables. Future versions of GNU Fortran may improve this behavior. The default value for n is 32768. -fstack-arrays Adding this option will make the Fortran compiler put all arrays of unknown size and array temporaries onto stack memory. If your program uses very large local arrays it is possible that you will have to extend your runtime limits for stack memory on some operating systems. This flag is enabled by default at optimization level -Ofast unless -fmax-stack-var-size is specified. -fpack-derived This option tells GNU Fortran to pack derived type members as closely as possible. Code compiled with this option is likely to be incompatible with code compiled without this option, and may execute slower. -frepack-arrays In some circumstances GNU Fortran may pass assumed shape array sections via a descriptor describing a noncontiguous area of memory. This option adds code to the function prologue to repack the data into a contiguous block at runtime. This should result in faster accesses to the array. However it can introduce significant overhead to the function call, especially when the passed data is noncontiguous. -fshort-enums This option is provided for interoperability with C code that was compiled with the -fshort-enums option. It will make GNU Fortran choose the smallest "INTEGER" kind a given enumerator set will fit in, and give all its enumerators this kind. -fexternal-blas This option will make gfortran generate calls to BLAS functions for some matrix operations like "MATMUL", instead of using our own algorithms, if the size of the matrices involved is larger than a given limit (see -fblas-matmul-limit). This may be profitable if an optimized vendor BLAS library is available. The BLAS library will have to be specified at link time. -fblas-matmul-limit=n Only significant when -fexternal-blas is in effect. Matrix multiplication of matrices with size larger than (or equal to) n will be performed by calls to BLAS functions, while others will be handled by gfortran internal algorithms. If the matrices involved are not square, the size comparison is performed using the geometric mean of the dimensions of the argument and result matrices. The default value for n is 30. -finline-matmul-limit=n When front-end optimization is active, some calls to the "MATMUL" intrinsic function will be inlined. This may result in code size increase if the size of the matrix cannot be determined at compile time, as code for both cases is generated. Setting "-finline-matmul-limit=0" will disable inlining in all cases. Setting this option with a value of n will produce inline code for matrices with size up to n.
If the matrices involved are not square, the size comparison is performed using the geometric mean of the dimensions of the argument and result matrices. The default value for n is 30. The "-fblas-matmul-limit" can be used to change this value. -frecursive Allow indirect recursion by forcing all local arrays to be allocated on the stack. This flag cannot be used together with -fmax-stack-var-size= or -fno-automatic. -finit-local-zero -finit-derived -finit-integer=n -finit-real=<zero|inf|-inf|nan|snan> -finit-logical=<true|false> -finit-character=n The -finit-local-zero option instructs the compiler to initialize local "INTEGER", "REAL", and "COMPLEX" variables to zero, "LOGICAL" variables to false, and "CHARACTER" variables to a string of null bytes. Finer-grained initialization options are provided by the -finit-integer=n, -finit-real=<zero|inf|-inf|nan|snan> (which also initializes the real and imaginary parts of local "COMPLEX" variables), -finit-logical=<true|false>, and -finit-character=n (where n is an ASCII character value) options. With -finit-derived, components of derived type variables will be initialized according to these flags. Components whose type is not covered by an explicit -finit-* flag will be treated as described above with -finit-local-zero. These options do not initialize * objects with the POINTER attribute * allocatable arrays * variables that appear in an "EQUIVALENCE" statement. (These limitations may be removed in future releases). Note that the -finit-real=nan option initializes "REAL" and "COMPLEX" variables with a quiet NaN. For a signalling NaN use -finit-real=snan; note, however, that compile-time optimizations may convert them into quiet NaN and that trapping needs to be enabled (e.g. via -ffpe-trap). The -finit-integer option will parse the value into an integer of type "INTEGER(kind=C_LONG)" on the host. Said value is then assigned to the integer variables in the Fortran code, which might result in wraparound if the value is too large for the kind. Finally, note that enabling any of the -finit-* options will silence warnings that would have been emitted by -Wuninitialized for the affected local variables. -falign-commons By default, gfortran enforces proper alignment of all variables in a "COMMON" block by padding them as needed. On certain platforms this is mandatory, on others it increases performance. If a "COMMON" block is not declared with consistent data types everywhere, this padding can cause trouble, and -fno-align-commons can be used to disable automatic alignment. The same form of this option should be used for all files that share a "COMMON" block. To avoid potential alignment issues in "COMMON" blocks, it is recommended to order objects from largest to smallest. -fno-protect-parens By default the parentheses in expression are honored for all optimization levels such that the compiler does not do any re-association. Using -fno-protect-parens allows the compiler to reorder "REAL" and "COMPLEX" expressions to produce faster code. Note that for the re-association optimization -fno-signed-zeros and -fno-trapping-math need to be in effect. The parentheses protection is enabled by default, unless -Ofast is given. -frealloc-lhs An allocatable left-hand side of an intrinsic assignment is automatically (re)allocated if it is either unallocated or has a different shape. The option is enabled by default except when -std=f95 is given. See also -Wrealloc-lhs. 
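As a sketch of combining the -finit-* options above with the -ffpe-trap option described earlier (file name hypothetical):

              ! uninit.f90
              program uninit
                implicit none
                real :: x, y
                x = 2.0 * y        ! y is never assigned before this use
                print *, x
              end program uninit

       Compiled with gfortran -finit-real=snan -ffpe-trap=invalid -g uninit.f90, the uninitialized y starts out as a signalling NaN, so its first use should raise SIGFPE at run time and, with the backtrace enabled by default, point at the offending statement, subject to the optimization caveats mentioned under -finit-real.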
-faggressive-function-elimination Functions with identical argument lists are eliminated within statements, regardless of whether these functions are marked "PURE" or not. For example, in a = f(b,c) + f(b,c) there will only be a single call to "f". This option only works if -ffrontend-optimize is in effect. -ffrontend-optimize This option performs front-end optimization, based on manipulating parts of the Fortran parse tree. Enabled by default by any -O option except -O0 and -Og. Optimizations enabled by this option include:

              * inlining calls to "MATMUL",
              * elimination of identical function calls within expressions,
              * removing unnecessary calls to "TRIM" in comparisons and assignments,
              * replacing TRIM(a) with "a(1:LEN_TRIM(a))", and
              * short-circuiting of logical operators (".AND." and ".OR.").

It can be deselected by specifying -fno-frontend-optimize. -ffrontend-loop-interchange Attempt to interchange loops in the Fortran front end where profitable. Enabled by default by any -O option. At the moment, this option only affects "FORALL" and "DO CONCURRENT" statements with several forall triplets. ENVIRONMENT top The gfortran compiler currently does not make use of any environment variables to control its operation above and beyond those that affect the operation of gcc. BUGS top For instructions on reporting bugs, see <https://gcc.gnu.org/bugs/>. SEE ALSO top gpl(7), gfdl(7), fsf-funding(7), cpp(1), gcov(1), gcc(1), as(1), ld(1), gdb(1), dbx(1) and the Info entries for gcc, cpp, gfortran, as, ld, binutils and gdb. AUTHOR top See the Info entry for gfortran for contributors to GCC and GNU Fortran. COPYRIGHT top Copyright (c) 2004-2019 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with the Invariant Sections being "Funding Free Software", the Front-Cover Texts being (a) (see below), and with the Back-Cover Texts being (b) (see below). A copy of the license is included in the gfdl(7) man page. (a) The FSF's Front-Cover Text is: A GNU Manual (b) The FSF's Back-Cover Text is: You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development. COLOPHON top This page is part of the gcc (GNU Compiler Collection) project. Information about the project can be found at http://gcc.gnu.org/. If you have a bug report for this manual page, see http://gcc.gnu.org/bugs/. This page was obtained from the tarball gcc-9.5.0.tar.xz fetched from ftp://ftp.gwdg.de/pub/misc/gcc/releases/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org gcc-9.5.0 2022-05-27 GFORTRAN(1)
# gfortran\n\n> Preprocess and compile Fortran source files, then assemble and link them together.\n> More information: <https://gcc.gnu.org/wiki/GFortran>.\n\n- Compile multiple source files into an executable:\n\n`gfortran {{path/to/source1.f90 path/to/source2.f90 ...}} -o {{path/to/output_executable}}`\n\n- Show common warnings, debug symbols in output, and optimize without affecting debugging:\n\n`gfortran {{path/to/source.f90}} -Wall -g -Og -o {{path/to/output_executable}}`\n\n- Include libraries from a different path:\n\n`gfortran {{path/to/source.f90}} -o {{path/to/output_executable}} -I{{path/to/mod_and_include}} -L{{path/to/library}} -l{{library_name}}`\n\n- Compile source code into Assembler instructions:\n\n`gfortran -S {{path/to/source.f90}}`\n\n- Compile source code into an object file without linking:\n\n`gfortran -c {{path/to/source.f90}}`\n
git
git(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | GIT COMMANDS | HIGH-LEVEL COMMANDS (PORCELAIN) | LOW-LEVEL COMMANDS (PLUMBING) | GUIDES | REPOSITORY, COMMAND AND FILE INTERFACES | FILE FORMATS, PROTOCOLS AND OTHER DEVELOPER INTERFACES | CONFIGURATION MECHANISM | IDENTIFIER TERMINOLOGY | SYMBOLIC IDENTIFIERS | FILE/DIRECTORY STRUCTURE | TERMINOLOGY | ENVIRONMENT VARIABLES | DISCUSSION | FURTHER DOCUMENTATION | AUTHORS | REPORTING BUGS | SEE ALSO | GIT | NOTES | COLOPHON GIT(1) Git Manual GIT(1) NAME top git - the stupid content tracker SYNOPSIS top git [-v | --version] [-h | --help] [-C <path>] [-c <name>=<value>] [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path] [-p|--paginate|-P|--no-pager] [--no-replace-objects] [--bare] [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>] [--config-env=<name>=<envvar>] <command> [<args>] DESCRIPTION top Git is a fast, scalable, distributed revision control system with an unusually rich command set that provides both high-level operations and full access to internals. See gittutorial(7) to get started, then see giteveryday(7) for a useful minimum set of commands. The Git Users Manual[1] has a more in-depth introduction. After you mastered the basic concepts, you can come back to this page to learn what commands Git offers. You can learn more about individual Git commands with "git help command". gitcli(7) manual page gives you an overview of the command-line command syntax. A formatted and hyperlinked copy of the latest Git documentation can be viewed at https://git.github.io/htmldocs/git.html or https://git-scm.com/docs . OPTIONS top -v, --version Prints the Git suite version that the git program came from. This option is internally converted to git version ... and accepts the same options as the git-version(1) command. If --help is also given, it takes precedence over --version. -h, --help Prints the synopsis and a list of the most commonly used commands. If the option --all or -a is given then all available commands are printed. If a Git command is named this option will bring up the manual page for that command. Other options are available to control how the manual page is displayed. See git-help(1) for more information, because git --help ... is converted internally into git help .... -C <path> Run as if git was started in <path> instead of the current working directory. When multiple -C options are given, each subsequent non-absolute -C <path> is interpreted relative to the preceding -C <path>. If <path> is present but empty, e.g. -C "", then the current working directory is left unchanged. This option affects options that expect path name like --git-dir and --work-tree in that their interpretations of the path names would be made relative to the working directory caused by the -C option. For example the following invocations are equivalent: git --git-dir=a.git --work-tree=b -C c status git --git-dir=c/a.git --work-tree=c/b status -c <name>=<value> Pass a configuration parameter to the command. The value given will override values from configuration files. The <name> is expected in the same format as listed by git config (subkeys separated by dots). Note that omitting the = in git -c foo.bar ... is allowed and sets foo.bar to the boolean true value (just like [foo]bar would in a config file). Including the equals but with an empty value (like git -c foo.bar= ...) 
sets foo.bar to the empty string which git config --type=bool will convert to false. --config-env=<name>=<envvar> Like -c <name>=<value>, give configuration variable <name> a value, where <envvar> is the name of an environment variable from which to retrieve the value. Unlike -c there is no shortcut for directly setting the value to an empty string, instead the environment variable itself must be set to the empty string. It is an error if the <envvar> does not exist in the environment. <envvar> may not contain an equals sign to avoid ambiguity with <name> containing one. This is useful for cases where you want to pass transitory configuration options to git, but are doing so on operating systems where other processes might be able to read your command line (e.g. /proc/self/cmdline), but not your environment (e.g. /proc/self/environ). That behavior is the default on Linux, but may not be on your system. Note that this might add security for variables such as http.extraHeader where the sensitive information is part of the value, but not e.g. url.<base>.insteadOf where the sensitive information can be part of the key. --exec-path[=<path>] Path to wherever your core Git programs are installed. This can also be controlled by setting the GIT_EXEC_PATH environment variable. If no path is given, git will print the current setting and then exit. --html-path Print the path, without trailing slash, where Gits HTML documentation is installed and exit. --man-path Print the manpath (see man(1)) for the man pages for this version of Git and exit. --info-path Print the path where the Info files documenting this version of Git are installed and exit. -p, --paginate Pipe all output into less (or if set, $PAGER) if standard output is a terminal. This overrides the pager.<cmd> configuration options (see the "Configuration Mechanism" section below). -P, --no-pager Do not pipe Git output into a pager. --git-dir=<path> Set the path to the repository (".git" directory). This can also be controlled by setting the GIT_DIR environment variable. It can be an absolute path or relative path to current working directory. Specifying the location of the ".git" directory using this option (or GIT_DIR environment variable) turns off the repository discovery that tries to find a directory with ".git" subdirectory (which is how the repository and the top-level of the working tree are discovered), and tells Git that you are at the top level of the working tree. If you are not at the top-level directory of the working tree, you should tell Git where the top-level of the working tree is, with the --work-tree=<path> option (or GIT_WORK_TREE environment variable) If you just want to run git as if it was started in <path> then use git -C <path>. --work-tree=<path> Set the path to the working tree. It can be an absolute path or a path relative to the current working directory. This can also be controlled by setting the GIT_WORK_TREE environment variable and the core.worktree configuration variable (see core.worktree in git-config(1) for a more detailed discussion). --namespace=<path> Set the Git namespace. See gitnamespaces(7) for more details. Equivalent to setting the GIT_NAMESPACE environment variable. --bare Treat the repository as a bare repository. If GIT_DIR environment is not set, it is set to the current working directory. --no-replace-objects Do not use replacement refs to replace Git objects. See git-replace(1) for more information. --literal-pathspecs Treat pathspecs literally (i.e. no globbing, no pathspec magic). 
This is equivalent to setting the GIT_LITERAL_PATHSPECS environment variable to 1. --glob-pathspecs Add "glob" magic to all pathspec. This is equivalent to setting the GIT_GLOB_PATHSPECS environment variable to 1. Disabling globbing on individual pathspecs can be done using pathspec magic ":(literal)" --noglob-pathspecs Add "literal" magic to all pathspec. This is equivalent to setting the GIT_NOGLOB_PATHSPECS environment variable to 1. Enabling globbing on individual pathspecs can be done using pathspec magic ":(glob)" --icase-pathspecs Add "icase" magic to all pathspec. This is equivalent to setting the GIT_ICASE_PATHSPECS environment variable to 1. --no-optional-locks Do not perform optional operations that require locks. This is equivalent to setting the GIT_OPTIONAL_LOCKS to 0. --list-cmds=group[,group...] List commands by group. This is an internal/experimental option and may change or be removed in the future. Supported groups are: builtins, parseopt (builtin commands that use parse-options), main (all commands in libexec directory), others (all other commands in $PATH that have git- prefix), list-<category> (see categories in command-list.txt), nohelpers (exclude helper commands), alias and config (retrieve command list from config variable completion.commands) --attr-source=<tree-ish> Read gitattributes from <tree-ish> instead of the worktree. See gitattributes(5). This is equivalent to setting the GIT_ATTR_SOURCE environment variable. GIT COMMANDS top We divide Git into high level ("porcelain") commands and low level ("plumbing") commands. HIGH-LEVEL COMMANDS (PORCELAIN) top We separate the porcelain commands into the main commands and some ancillary user utilities. Main porcelain commands git-add(1) Add file contents to the index. git-am(1) Apply a series of patches from a mailbox. git-archive(1) Create an archive of files from a named tree. git-bisect(1) Use binary search to find the commit that introduced a bug. git-branch(1) List, create, or delete branches. git-bundle(1) Move objects and refs by archive. git-checkout(1) Switch branches or restore working tree files. git-cherry-pick(1) Apply the changes introduced by some existing commits. git-citool(1) Graphical alternative to git-commit. git-clean(1) Remove untracked files from the working tree. git-clone(1) Clone a repository into a new directory. git-commit(1) Record changes to the repository. git-describe(1) Give an object a human readable name based on an available ref. git-diff(1) Show changes between commits, commit and working tree, etc. git-fetch(1) Download objects and refs from another repository. git-format-patch(1) Prepare patches for e-mail submission. git-gc(1) Cleanup unnecessary files and optimize the local repository. git-grep(1) Print lines matching a pattern. git-gui(1) A portable graphical interface to Git. git-init(1) Create an empty Git repository or reinitialize an existing one. git-log(1) Show commit logs. git-maintenance(1) Run tasks to optimize Git repository data. git-merge(1) Join two or more development histories together. git-mv(1) Move or rename a file, a directory, or a symlink. git-notes(1) Add or inspect object notes. git-pull(1) Fetch from and integrate with another repository or a local branch. git-push(1) Update remote refs along with associated objects. git-range-diff(1) Compare two commit ranges (e.g. two versions of a branch). git-rebase(1) Reapply commits on top of another base tip. git-reset(1) Reset current HEAD to the specified state. git-restore(1) Restore working tree files. 
git-revert(1) Revert some existing commits. git-rm(1) Remove files from the working tree and from the index. git-shortlog(1) Summarize git log output. git-show(1) Show various types of objects. git-sparse-checkout(1) Reduce your working tree to a subset of tracked files. git-stash(1) Stash the changes in a dirty working directory away. git-status(1) Show the working tree status. git-submodule(1) Initialize, update or inspect submodules. git-switch(1) Switch branches. git-tag(1) Create, list, delete or verify a tag object signed with GPG. git-worktree(1) Manage multiple working trees. gitk(1) The Git repository browser. scalar(1) A tool for managing large Git repositories. Ancillary Commands Manipulators: git-config(1) Get and set repository or global options. git-fast-export(1) Git data exporter. git-fast-import(1) Backend for fast Git data importers. git-filter-branch(1) Rewrite branches. git-mergetool(1) Run merge conflict resolution tools to resolve merge conflicts. git-pack-refs(1) Pack heads and tags for efficient repository access. git-prune(1) Prune all unreachable objects from the object database. git-reflog(1) Manage reflog information. git-remote(1) Manage set of tracked repositories. git-repack(1) Pack unpacked objects in a repository. git-replace(1) Create, list, delete refs to replace objects. Interrogators: git-annotate(1) Annotate file lines with commit information. git-blame(1) Show what revision and author last modified each line of a file. git-bugreport(1) Collect information for user to file a bug report. git-count-objects(1) Count unpacked number of objects and their disk consumption. git-diagnose(1) Generate a zip archive of diagnostic information. git-difftool(1) Show changes using common diff tools. git-fsck(1) Verifies the connectivity and validity of the objects in the database. git-help(1) Display help information about Git. git-instaweb(1) Instantly browse your working repository in gitweb. git-merge-tree(1) Perform merge without touching index or working tree. git-rerere(1) Reuse recorded resolution of conflicted merges. git-show-branch(1) Show branches and their commits. git-verify-commit(1) Check the GPG signature of commits. git-verify-tag(1) Check the GPG signature of tags. git-version(1) Display version information about Git. git-whatchanged(1) Show logs with differences each commit introduces. gitweb(1) Git web interface (web frontend to Git repositories). Interacting with Others These commands are to interact with foreign SCM and with other people via patch over e-mail. git-archimport(1) Import a GNU Arch repository into Git. git-cvsexportcommit(1) Export a single commit to a CVS checkout. git-cvsimport(1) Salvage your data out of another SCM people love to hate. git-cvsserver(1) A CVS server emulator for Git. git-imap-send(1) Send a collection of patches from stdin to an IMAP folder. git-p4(1) Import from and submit to Perforce repositories. git-quiltimport(1) Applies a quilt patchset onto the current branch. git-request-pull(1) Generates a summary of pending changes. git-send-email(1) Send a collection of patches as emails. git-svn(1) Bidirectional operation between a Subversion repository and Git. Reset, restore and revert There are three commands with similar names: git reset, git restore and git revert. git-revert(1) is about making a new commit that reverts the changes made by other commits. git-restore(1) is about restoring files in the working tree from either the index or another commit. This command does not update your branch. 
The command can also be used to restore files in the index from another commit. git-reset(1) is about updating your branch, moving the tip in order to add or remove commits from the branch. This operation changes the commit history. git reset can also be used to restore the index, overlapping with git restore. LOW-LEVEL COMMANDS (PLUMBING) top Although Git includes its own porcelain layer, its low-level commands are sufficient to support development of alternative porcelains. Developers of such porcelains might start by reading about git-update-index(1) and git-read-tree(1). The interface (input, output, set of options and the semantics) to these low-level commands are meant to be a lot more stable than Porcelain level commands, because these commands are primarily for scripted use. The interface to Porcelain commands on the other hand are subject to change in order to improve the end user experience. The following description divides the low-level commands into commands that manipulate objects (in the repository, index, and working tree), commands that interrogate and compare objects, and commands that move objects and references between repositories. Manipulation commands git-apply(1) Apply a patch to files and/or to the index. git-checkout-index(1) Copy files from the index to the working tree. git-commit-graph(1) Write and verify Git commit-graph files. git-commit-tree(1) Create a new commit object. git-hash-object(1) Compute object ID and optionally create an object from a file. git-index-pack(1) Build pack index file for an existing packed archive. git-merge-file(1) Run a three-way file merge. git-merge-index(1) Run a merge for files needing merging. git-mktag(1) Creates a tag object with extra validation. git-mktree(1) Build a tree-object from ls-tree formatted text. git-multi-pack-index(1) Write and verify multi-pack-indexes. git-pack-objects(1) Create a packed archive of objects. git-prune-packed(1) Remove extra objects that are already in pack files. git-read-tree(1) Reads tree information into the index. git-replay(1) EXPERIMENTAL: Replay commits on a new base, works with bare repos too. git-symbolic-ref(1) Read, modify and delete symbolic refs. git-unpack-objects(1) Unpack objects from a packed archive. git-update-index(1) Register file contents in the working tree to the index. git-update-ref(1) Update the object name stored in a ref safely. git-write-tree(1) Create a tree object from the current index. Interrogation commands git-cat-file(1) Provide contents or details of repository objects. git-cherry(1) Find commits yet to be applied to upstream. git-diff-files(1) Compares files in the working tree and the index. git-diff-index(1) Compare a tree to the working tree or index. git-diff-tree(1) Compares the content and mode of blobs found via two tree objects. git-for-each-ref(1) Output information on each ref. git-for-each-repo(1) Run a Git command on a list of repositories. git-get-tar-commit-id(1) Extract commit ID from an archive created using git-archive. git-ls-files(1) Show information about files in the index and the working tree. git-ls-remote(1) List references in a remote repository. git-ls-tree(1) List the contents of a tree object. git-merge-base(1) Find as good common ancestors as possible for a merge. git-name-rev(1) Find symbolic names for given revs. git-pack-redundant(1) Find redundant pack files. git-rev-list(1) Lists commit objects in reverse chronological order. git-rev-parse(1) Pick out and massage parameters. git-show-index(1) Show packed archive index. 
git-show-ref(1) List references in a local repository. git-unpack-file(1) Creates a temporary file with a blobs contents. git-var(1) Show a Git logical variable. git-verify-pack(1) Validate packed Git archive files. In general, the interrogate commands do not touch the files in the working tree. Syncing repositories git-daemon(1) A really simple server for Git repositories. git-fetch-pack(1) Receive missing objects from another repository. git-http-backend(1) Server side implementation of Git over HTTP. git-send-pack(1) Push objects over Git protocol to another repository. git-update-server-info(1) Update auxiliary info file to help dumb servers. The following are helper commands used by the above; end users typically do not use them directly. git-http-fetch(1) Download from a remote Git repository via HTTP. git-http-push(1) Push objects over HTTP/DAV to another repository. git-receive-pack(1) Receive what is pushed into the repository. git-shell(1) Restricted login shell for Git-only SSH access. git-upload-archive(1) Send archive back to git-archive. git-upload-pack(1) Send objects packed back to git-fetch-pack. Internal helper commands These are internal helper commands used by other commands; end users typically do not use them directly. git-check-attr(1) Display gitattributes information. git-check-ignore(1) Debug gitignore / exclude files. git-check-mailmap(1) Show canonical names and email addresses of contacts. git-check-ref-format(1) Ensures that a reference name is well formed. git-column(1) Display data in columns. git-credential(1) Retrieve and store user credentials. git-credential-cache(1) Helper to temporarily store passwords in memory. git-credential-store(1) Helper to store credentials on disk. git-fmt-merge-msg(1) Produce a merge commit message. git-hook(1) Run git hooks. git-interpret-trailers(1) Add or parse structured information in commit messages. git-mailinfo(1) Extracts patch and authorship from a single e-mail message. git-mailsplit(1) Simple UNIX mbox splitter program. git-merge-one-file(1) The standard helper program to use with git-merge-index. git-patch-id(1) Compute unique ID for a patch. git-sh-i18n(1) Gits i18n setup code for shell scripts. git-sh-setup(1) Common Git shell script setup code. git-stripspace(1) Remove unnecessary whitespace. GUIDES top The following documentation pages are guides about Git concepts. gitcore-tutorial(7) A Git core tutorial for developers. gitcredentials(7) Providing usernames and passwords to Git. gitcvs-migration(7) Git for CVS users. gitdiffcore(7) Tweaking diff output. giteveryday(7) A useful minimum set of commands for Everyday Git. gitfaq(7) Frequently asked questions about using Git. gitglossary(7) A Git Glossary. gitnamespaces(7) Git namespaces. gitremote-helpers(7) Helper programs to interact with remote repositories. gitsubmodules(7) Mounting one repository inside another. gittutorial(7) A tutorial introduction to Git. gittutorial-2(7) A tutorial introduction to Git: part two. gitworkflows(7) An overview of recommended workflows with Git. REPOSITORY, COMMAND AND FILE INTERFACES top This documentation discusses repository and command interfaces which users are expected to interact with directly. See --user-formats in git-help(1) for more details on the criteria. gitattributes(5) Defining attributes per path. gitcli(7) Git command-line interface and conventions. githooks(5) Hooks used by Git. gitignore(5) Specifies intentionally untracked files to ignore. gitmailmap(5) Map author/committer names and/or E-Mail addresses. 
gitmodules(5) Defining submodule properties. gitrepository-layout(5) Git Repository Layout. gitrevisions(7) Specifying revisions and ranges for Git. FILE FORMATS, PROTOCOLS AND OTHER DEVELOPER INTERFACES top This documentation discusses file formats, over-the-wire protocols and other git developer interfaces. See --developer-interfaces in git-help(1). gitformat-bundle(5) The bundle file format. gitformat-chunk(5) Chunk-based file formats. gitformat-commit-graph(5) Git commit-graph format. gitformat-index(5) Git index format. gitformat-pack(5) Git pack format. gitformat-signature(5) Git cryptographic signature formats. gitprotocol-capabilities(5) Protocol v0 and v1 capabilities. gitprotocol-common(5) Things common to various protocols. gitprotocol-http(5) Git HTTP-based protocols. gitprotocol-pack(5) How packs are transferred over-the-wire. gitprotocol-v2(5) Git Wire Protocol, Version 2. CONFIGURATION MECHANISM top Git uses a simple text format to store customizations that are per repository and are per user. Such a configuration file may look like this: # # A '#' or ';' character indicates a comment. # ; core variables [core] ; Don't trust file modes filemode = false ; user identity [user] name = "Junio C Hamano" email = "gitster@pobox.com" Various commands read from the configuration file and adjust their operation accordingly. See git-config(1) for a list and more details about the configuration mechanism. IDENTIFIER TERMINOLOGY top <object> Indicates the object name for any type of object. <blob> Indicates a blob object name. <tree> Indicates a tree object name. <commit> Indicates a commit object name. <tree-ish> Indicates a tree, commit or tag object name. A command that takes a <tree-ish> argument ultimately wants to operate on a <tree> object but automatically dereferences <commit> and <tag> objects that point at a <tree>. <commit-ish> Indicates a commit or tag object name. A command that takes a <commit-ish> argument ultimately wants to operate on a <commit> object but automatically dereferences <tag> objects that point at a <commit>. <type> Indicates that an object type is required. Currently one of: blob, tree, commit, or tag. <file> Indicates a filename - almost always relative to the root of the tree structure GIT_INDEX_FILE describes. SYMBOLIC IDENTIFIERS top Any Git command accepting any <object> can also use the following symbolic notation: HEAD indicates the head of the current branch. <tag> a valid tag name (i.e. a refs/tags/<tag> reference). <head> a valid head name (i.e. a refs/heads/<head> reference). For a more complete list of ways to spell object names, see "SPECIFYING REVISIONS" section in gitrevisions(7). FILE/DIRECTORY STRUCTURE top Please see the gitrepository-layout(5) document. Read githooks(5) for more details about each hook. Higher level SCMs may provide and manage additional information in the $GIT_DIR. TERMINOLOGY top Please see gitglossary(7). ENVIRONMENT VARIABLES top Various Git commands pay attention to environment variables and change their behavior. The environment variables marked as "Boolean" take their values the same way as Boolean valued configuration variables, e.g. "true", "yes", "on" and positive numbers are taken as "yes". Here are the variables: The Git Repository These environment variables apply to all core Git commands. Nb: it is worth noting that they may be used/overridden by SCMS sitting above Git so take care if using a foreign front-end. GIT_INDEX_FILE This environment variable specifies an alternate index file. 
If not specified, the default of $GIT_DIR/index is used. GIT_INDEX_VERSION This environment variable specifies what index version is used when writing the index file out. It wont affect existing index files. By default index file version 2 or 3 is used. See git-update-index(1) for more information. GIT_OBJECT_DIRECTORY If the object storage directory is specified via this environment variable then the sha1 directories are created underneath - otherwise the default $GIT_DIR/objects directory is used. GIT_ALTERNATE_OBJECT_DIRECTORIES Due to the immutable nature of Git objects, old objects can be archived into shared, read-only directories. This variable specifies a ":" separated (on Windows ";" separated) list of Git object directories which can be used to search for Git objects. New objects will not be written to these directories. Entries that begin with " (double-quote) will be interpreted as C-style quoted paths, removing leading and trailing double-quotes and respecting backslash escapes. E.g., the value "path-with-\"-and-:-in-it":vanilla-path has two paths: path-with-"-and-:-in-it and vanilla-path. GIT_DIR If the GIT_DIR environment variable is set then it specifies a path to use instead of the default .git for the base of the repository. The --git-dir command-line option also sets this value. GIT_WORK_TREE Set the path to the root of the working tree. This can also be controlled by the --work-tree command-line option and the core.worktree configuration variable. GIT_NAMESPACE Set the Git namespace; see gitnamespaces(7) for details. The --namespace command-line option also sets this value. GIT_CEILING_DIRECTORIES This should be a colon-separated list of absolute paths. If set, it is a list of directories that Git should not chdir up into while looking for a repository directory (useful for excluding slow-loading network directories). It will not exclude the current working directory or a GIT_DIR set on the command line or in the environment. Normally, Git has to read the entries in this list and resolve any symlink that might be present in order to compare them with the current directory. However, if even this access is slow, you can add an empty entry to the list to tell Git that the subsequent entries are not symlinks and neednt be resolved; e.g., GIT_CEILING_DIRECTORIES=/maybe/symlink::/very/slow/non/symlink. GIT_DISCOVERY_ACROSS_FILESYSTEM When run in a directory that does not have ".git" repository directory, Git tries to find such a directory in the parent directories to find the top of the working tree, but by default it does not cross filesystem boundaries. This Boolean environment variable can be set to true to tell Git not to stop at filesystem boundaries. Like GIT_CEILING_DIRECTORIES, this will not affect an explicit repository directory set via GIT_DIR or on the command line. GIT_COMMON_DIR If this variable is set to a path, non-worktree files that are normally in $GIT_DIR will be taken from this path instead. Worktree-specific files such as HEAD or index are taken from $GIT_DIR. See gitrepository-layout(5) and git-worktree(1) for details. This variable has lower precedence than other path variables such as GIT_INDEX_FILE, GIT_OBJECT_DIRECTORY... GIT_DEFAULT_HASH If this variable is set, the default hash algorithm for new repositories will be set to this value. This value is ignored when cloning and the setting of the remote repository is always used. The default is "sha1". See --object-format in git-init(1). 
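As a quick illustration of the repository-location variables described above, the following minimal sketch points Git at a repository and working tree outside the current directory; the paths are hypothetical placeholders, and the environment-variable form is interchangeable with the corresponding command-line options:

   $ # environment-variable form (paths are examples only)
   $ GIT_DIR=/srv/repos/project.git GIT_WORK_TREE=/srv/checkout git status

   $ # equivalent command-line form
   $ git --git-dir=/srv/repos/project.git --work-tree=/srv/checkout status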
Git Commits GIT_AUTHOR_NAME The human-readable name used in the author identity when creating commit or tag objects, or when writing reflogs. Overrides the user.name and author.name configuration settings. GIT_AUTHOR_EMAIL The email address used in the author identity when creating commit or tag objects, or when writing reflogs. Overrides the user.email and author.email configuration settings. GIT_AUTHOR_DATE The date used for the author identity when creating commit or tag objects, or when writing reflogs. See git-commit(1) for valid formats. GIT_COMMITTER_NAME The human-readable name used in the committer identity when creating commit or tag objects, or when writing reflogs. Overrides the user.name and committer.name configuration settings. GIT_COMMITTER_EMAIL The email address used in the committer identity when creating commit or tag objects, or when writing reflogs. Overrides the user.email and committer.email configuration settings. GIT_COMMITTER_DATE The date used for the committer identity when creating commit or tag objects, or when writing reflogs. See git-commit(1) for valid formats. EMAIL The email address used in the author and committer identities if no other relevant environment variable or configuration setting has been set. Git Diffs GIT_DIFF_OPTS Only valid setting is "--unified=??" or "-u??" to set the number of context lines shown when a unified diff is created. This takes precedence over any "-U" or "--unified" option value passed on the Git diff command line. GIT_EXTERNAL_DIFF When the environment variable GIT_EXTERNAL_DIFF is set, the program named by it is called to generate diffs, and Git does not use its builtin diff machinery. For a path that is added, removed, or modified, GIT_EXTERNAL_DIFF is called with 7 parameters: path old-file old-hex old-mode new-file new-hex new-mode where: <old|new>-file are files GIT_EXTERNAL_DIFF can use to read the contents of <old|new>, <old|new>-hex are the 40-hexdigit SHA-1 hashes, <old|new>-mode are the octal representation of the file modes. The file parameters can point at the user's working file (e.g. new-file in "git-diff-files"), /dev/null (e.g. old-file when a new file is added), or a temporary file (e.g. old-file in the index). GIT_EXTERNAL_DIFF should not worry about unlinking the temporary file; it is removed when GIT_EXTERNAL_DIFF exits. For a path that is unmerged, GIT_EXTERNAL_DIFF is called with 1 parameter, <path>. For each path GIT_EXTERNAL_DIFF is called, two environment variables, GIT_DIFF_PATH_COUNTER and GIT_DIFF_PATH_TOTAL are set. GIT_DIFF_PATH_COUNTER A 1-based counter incremented by one for every path. GIT_DIFF_PATH_TOTAL The total number of paths. other GIT_MERGE_VERBOSITY A number controlling the amount of output shown by the recursive merge strategy. Overrides merge.verbosity. See git-merge(1) GIT_PAGER This environment variable overrides $PAGER. If it is set to an empty string or to the value "cat", Git will not launch a pager. See also the core.pager option in git-config(1). GIT_PROGRESS_DELAY A number controlling how many seconds to delay before showing optional progress indicators. Defaults to 2. GIT_EDITOR This environment variable overrides $EDITOR and $VISUAL. It is used by several Git commands when, in interactive mode, an editor is to be launched. See also git-var(1) and the core.editor option in git-config(1). GIT_SEQUENCE_EDITOR This environment variable overrides the configured Git editor when editing the todo list of an interactive rebase. 
See also git-rebase(1) and the sequence.editor option in git-config(1). GIT_SSH, GIT_SSH_COMMAND If either of these environment variables is set then git fetch and git push will use the specified command instead of ssh when they need to connect to a remote system. The command-line parameters passed to the configured command are determined by the ssh variant. See ssh.variant option in git-config(1) for details. $GIT_SSH_COMMAND takes precedence over $GIT_SSH, and is interpreted by the shell, which allows additional arguments to be included. $GIT_SSH on the other hand must be just the path to a program (which can be a wrapper shell script, if additional arguments are needed). Usually it is easier to configure any desired options through your personal .ssh/config file. Please consult your ssh documentation for further details. GIT_SSH_VARIANT If this environment variable is set, it overrides Git's autodetection whether GIT_SSH/GIT_SSH_COMMAND/core.sshCommand refer to OpenSSH, plink or tortoiseplink. This variable overrides the config setting ssh.variant that serves the same purpose. GIT_SSL_NO_VERIFY Setting and exporting this environment variable to any value tells Git not to verify the SSL certificate when fetching or pushing over HTTPS. GIT_ATTR_SOURCE Sets the treeish that gitattributes will be read from. GIT_ASKPASS If this environment variable is set, then Git commands which need to acquire passwords or passphrases (e.g. for HTTP or IMAP authentication) will call this program with a suitable prompt as command-line argument and read the password from its STDOUT. See also the core.askPass option in git-config(1). GIT_TERMINAL_PROMPT If this Boolean environment variable is set to false, git will not prompt on the terminal (e.g., when asking for HTTP authentication). GIT_CONFIG_GLOBAL, GIT_CONFIG_SYSTEM Take the configuration from the given files instead of from global or system-level configuration files. If GIT_CONFIG_SYSTEM is set, the system config file defined at build time (usually /etc/gitconfig) will not be read. Likewise, if GIT_CONFIG_GLOBAL is set, neither $HOME/.gitconfig nor $XDG_CONFIG_HOME/git/config will be read. Can be set to /dev/null to skip reading configuration files of the respective level. GIT_CONFIG_NOSYSTEM Whether to skip reading settings from the system-wide $(prefix)/etc/gitconfig file. This Boolean environment variable can be used along with $HOME and $XDG_CONFIG_HOME to create a predictable environment for a picky script, or you can set it to true to temporarily avoid using a buggy /etc/gitconfig file while waiting for someone with sufficient permissions to fix it. GIT_FLUSH If this environment variable is set to "1", then commands such as git blame (in incremental mode), git rev-list, git log, git check-attr and git check-ignore will force a flush of the output stream after each record. If this variable is set to "0", the output of these commands will be done using completely buffered I/O. If this environment variable is not set, Git will choose buffered or record-oriented flushing based on whether stdout appears to be redirected to a file or not. GIT_TRACE Enables general trace messages, e.g. alias expansion, built-in command execution and external command execution. If this variable is set to "1", "2" or "true" (comparison is case insensitive), trace messages will be printed to stderr. 
If the variable is set to an integer value greater than 2 and lower than 10 (strictly) then Git will interpret this value as an open file descriptor and will try to write the trace messages into this file descriptor. Alternatively, if the variable is set to an absolute path (starting with a / character), Git will interpret this as a file path and will try to append the trace messages to it. Unsetting the variable, or setting it to empty, "0" or "false" (case insensitive) disables trace messages. GIT_TRACE_FSMONITOR Enables trace messages for the filesystem monitor extension. See GIT_TRACE for available trace output options. GIT_TRACE_PACK_ACCESS Enables trace messages for all accesses to any packs. For each access, the pack file name and an offset in the pack is recorded. This may be helpful for troubleshooting some pack-related performance problems. See GIT_TRACE for available trace output options. GIT_TRACE_PACKET Enables trace messages for all packets coming in or out of a given program. This can help with debugging object negotiation or other protocol issues. Tracing is turned off at a packet starting with "PACK" (but see GIT_TRACE_PACKFILE below). See GIT_TRACE for available trace output options. GIT_TRACE_PACKFILE Enables tracing of packfiles sent or received by a given program. Unlike other trace output, this trace is verbatim: no headers, and no quoting of binary data. You almost certainly want to direct into a file (e.g., GIT_TRACE_PACKFILE=/tmp/my.pack) rather than displaying it on the terminal or mixing it with other trace output. Note that this is currently only implemented for the client side of clones and fetches. GIT_TRACE_PERFORMANCE Enables performance related trace messages, e.g. total execution time of each Git command. See GIT_TRACE for available trace output options. GIT_TRACE_REFS Enables trace messages for operations on the ref database. See GIT_TRACE for available trace output options. GIT_TRACE_SETUP Enables trace messages printing the .git, working tree and current working directory after Git has completed its setup phase. See GIT_TRACE for available trace output options. GIT_TRACE_SHALLOW Enables trace messages that can help debugging fetching / cloning of shallow repositories. See GIT_TRACE for available trace output options. GIT_TRACE_CURL Enables a curl full trace dump of all incoming and outgoing data, including descriptive information, of the git transport protocol. This is similar to doing curl --trace-ascii on the command line. See GIT_TRACE for available trace output options. GIT_TRACE_CURL_NO_DATA When a curl trace is enabled (see GIT_TRACE_CURL above), do not dump data (that is, only dump info lines and headers). GIT_TRACE2 Enables more detailed trace messages from the "trace2" library. Output from GIT_TRACE2 is a simple text-based format for human readability. If this variable is set to "1", "2" or "true" (comparison is case insensitive), trace messages will be printed to stderr. If the variable is set to an integer value greater than 2 and lower than 10 (strictly) then Git will interpret this value as an open file descriptor and will try to write the trace messages into this file descriptor. Alternatively, if the variable is set to an absolute path (starting with a / character), Git will interpret this as a file path and will try to append the trace messages to it. 
If the path already exists and is a directory, the trace messages will be written to files (one per process) in that directory, named according to the last component of the SID and an optional counter (to avoid filename collisions). In addition, if the variable is set to af_unix:[<socket_type>:]<absolute-pathname>, Git will try to open the path as a Unix Domain Socket. The socket type can be either stream or dgram. Unsetting the variable, or setting it to empty, "0" or "false" (case insensitive) disables trace messages. See Trace2 documentation[2] for full details. GIT_TRACE2_EVENT This setting writes a JSON-based format that is suited for machine interpretation. See GIT_TRACE2 for available trace output options and Trace2 documentation[2] for full details. GIT_TRACE2_PERF In addition to the text-based messages available in GIT_TRACE2, this setting writes a column-based format for understanding nesting regions. See GIT_TRACE2 for available trace output options and Trace2 documentation[2] for full details. GIT_TRACE_REDACT By default, when tracing is activated, Git redacts the values of cookies, the "Authorization:" header, the "Proxy-Authorization:" header and packfile URIs. Set this Boolean environment variable to false to prevent this redaction. GIT_LITERAL_PATHSPECS Setting this Boolean environment variable to true will cause Git to treat all pathspecs literally, rather than as glob patterns. For example, running GIT_LITERAL_PATHSPECS=1 git log -- '*.c' will search for commits that touch the path *.c, not any paths that the glob *.c matches. You might want this if you are feeding literal paths to Git (e.g., paths previously given to you by git ls-tree, --raw diff output, etc). GIT_GLOB_PATHSPECS Setting this Boolean environment variable to true will cause Git to treat all pathspecs as glob patterns (aka "glob" magic). GIT_NOGLOB_PATHSPECS Setting this Boolean environment variable to true will cause Git to treat all pathspecs as literal (aka "literal" magic). GIT_ICASE_PATHSPECS Setting this Boolean environment variable to true will cause Git to treat all pathspecs as case-insensitive. GIT_REFLOG_ACTION When a ref is updated, reflog entries are created to keep track of the reason why the ref was updated (which is typically the name of the high-level command that updated the ref), in addition to the old and new values of the ref. A scripted Porcelain command can use set_reflog_action helper function in git-sh-setup to set its name to this variable when it is invoked as the top level command by the end user, to be recorded in the body of the reflog. GIT_REF_PARANOIA If this Boolean environment variable is set to false, ignore broken or badly named refs when iterating over lists of refs. Normally Git will try to include any such refs, which may cause some operations to fail. This is usually preferable, as potentially destructive operations (e.g., git-prune(1)) are better off aborting rather than ignoring broken refs (and thus considering the history they point to as not worth saving). The default value is 1 (i.e., be paranoid about detecting and aborting all operations). You should not normally need to set this to 0, but it may be useful when trying to salvage data from a corrupted repository. GIT_COMMIT_GRAPH_PARANOIA When loading a commit object from the commit-graph, Git performs an existence check on the object in the object database. This is done to avoid issues with stale commit-graphs that contain references to already-deleted commits, but comes with a performance penalty. 
The default is "false", which disables the aforementioned behavior. Setting this to "true" enables the existence check so that stale commits will never be returned from the commit-graph at the cost of performance. GIT_ALLOW_PROTOCOL If set to a colon-separated list of protocols, behave as if protocol.allow is set to never, and each of the listed protocols has protocol.<name>.allow set to always (overriding any existing configuration). See the description of protocol.allow in git-config(1) for more details. GIT_PROTOCOL_FROM_USER Set this Boolean environment variable to false to prevent protocols used by fetch/push/clone which are configured to the user state. This is useful to restrict recursive submodule initialization from an untrusted repository or for programs which feed potentially-untrusted URLS to git commands. See git-config(1) for more details. GIT_PROTOCOL For internal use only. Used in handshaking the wire protocol. Contains a colon : separated list of keys with optional values key[=value]. Presence of unknown keys and values must be ignored. Note that servers may need to be configured to allow this variable to pass over some transports. It will be propagated automatically when accessing local repositories (i.e., file:// or a filesystem path), as well as over the git:// protocol. For git-over-http, it should work automatically in most configurations, but see the discussion in git-http-backend(1). For git-over-ssh, the ssh server may need to be configured to allow clients to pass this variable (e.g., by using AcceptEnv GIT_PROTOCOL with OpenSSH). This configuration is optional. If the variable is not propagated, then clients will fall back to the original "v0" protocol (but may miss out on some performance improvements or features). This variable currently only affects clones and fetches; it is not yet used for pushes (but may be in the future). GIT_OPTIONAL_LOCKS If this Boolean environment variable is set to false, Git will complete any requested operation without performing any optional sub-operations that require taking a lock. For example, this will prevent git status from refreshing the index as a side effect. This is useful for processes running in the background which do not want to cause lock contention with other operations on the repository. Defaults to 1. GIT_REDIRECT_STDIN, GIT_REDIRECT_STDOUT, GIT_REDIRECT_STDERR Windows-only: allow redirecting the standard input/output/error handles to paths specified by the environment variables. This is particularly useful in multi-threaded applications where the canonical way to pass standard handles via CreateProcess() is not an option because it would require the handles to be marked inheritable (and consequently every spawned process would inherit them, possibly blocking regular Git operations). The primary intended use case is to use named pipes for communication (e.g. \\.\pipe\my-git-stdin-123). Two special values are supported: off will simply close the corresponding standard handle, and if GIT_REDIRECT_STDERR is 2>&1, standard error will be redirected to the same handle as standard output. GIT_PRINT_SHA1_ELLIPSIS (deprecated) If set to yes, print an ellipsis following an (abbreviated) SHA-1 value. This affects indications of detached HEADs ( git-checkout(1)) and the raw diff output (git-diff(1)). Printing an ellipsis in the cases mentioned is no longer considered adequate and support for it is likely to be removed in the foreseeable future (along with the variable). 
DISCUSSION top More detail on the following is available from the Git concepts chapter of the user-manual[3] and gitcore-tutorial(7). A Git project normally consists of a working directory with a ".git" subdirectory at the top level. The .git directory contains, among other things, a compressed object database representing the complete history of the project, an "index" file which links that history to the current contents of the working tree, and named pointers into that history such as tags and branch heads. The object database contains objects of three main types: blobs, which hold file data; trees, which point to blobs and other trees to build up directory hierarchies; and commits, which each reference a single tree and some number of parent commits. The commit, equivalent to what other systems call a "changeset" or "version", represents a step in the project's history, and each parent represents an immediately preceding step. Commits with more than one parent represent merges of independent lines of development. All objects are named by the SHA-1 hash of their contents, normally written as a string of 40 hex digits. Such names are globally unique. The entire history leading up to a commit can be vouched for by signing just that commit. A fourth object type, the tag, is provided for this purpose. When first created, objects are stored in individual files, but for efficiency may later be compressed together into "pack files". Named pointers called refs mark interesting points in history. A ref may contain the SHA-1 name of an object or the name of another ref. Refs with names beginning refs/heads/ contain the SHA-1 name of the most recent commit (or "head") of a branch under development. SHA-1 names of tags of interest are stored under refs/tags/. A special ref named HEAD contains the name of the currently checked-out branch. The index file is initialized with a list of all paths and, for each path, a blob object and a set of attributes. The blob object represents the contents of the file as of the head of the current branch. The attributes (last modified time, size, etc.) are taken from the corresponding file in the working tree. Subsequent changes to the working tree can be found by comparing these attributes. The index may be updated with new content, and new commits may be created from the content stored in the index. The index is also capable of storing multiple entries (called "stages") for a given pathname. These stages are used to hold the various unmerged versions of a file when a merge is in progress. FURTHER DOCUMENTATION top See the references in the "description" section to get started using Git. The following is probably more detail than necessary for a first-time user. The Git concepts chapter of the user-manual[3] and gitcore-tutorial(7) both provide introductions to the underlying Git architecture. See gitworkflows(7) for an overview of recommended workflows. See also the howto[4] documents for some useful examples. The internals are documented in the Git API documentation[5]. Users migrating from CVS may also want to read gitcvs-migration(7). AUTHORS top Git was started by Linus Torvalds, and is currently maintained by Junio C Hamano. Numerous contributions have come from the Git mailing list <git@vger.kernel.org[6]>. https://openhub.net/p/git/contributors/summary gives you a more complete list of contributors. If you have a clone of git.git itself, the output of git-shortlog(1) and git-blame(1) can show you the authors for specific parts of the project. 
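The object model summarized in the DISCUSSION section above can be inspected directly with plumbing commands; a minimal sketch, run inside any existing repository:

   $ git rev-parse HEAD                 # SHA-1 name of the current commit
   $ git cat-file -t HEAD               # object type (commit)
   $ git cat-file -p HEAD               # tree, parents, author, committer, message
   $ git cat-file -p 'HEAD^{tree}'      # the tree object the commit references
   $ git show-ref --heads --tags        # refs under refs/heads/ and refs/tags/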
REPORTING BUGS top Report bugs to the Git mailing list <git@vger.kernel.org[6]> where the development and maintenance is primarily done. You do not have to be subscribed to the list to send a message there. See the list archive at https://lore.kernel.org/git for previous bug reports and other discussions. Issues which are security relevant should be disclosed privately to the Git Security mailing list <git-security@googlegroups.com[7]>. SEE ALSO top gittutorial(7), gittutorial-2(7), giteveryday(7), gitcvs-migration(7), gitglossary(7), gitcore-tutorial(7), gitcli(7), The Git User's Manual[1], gitworkflows(7) GIT top Part of the git(1) suite NOTES top 1. Git User's Manual file:///home/mtk/share/doc/git-doc/user-manual.html 2. Trace2 documentation file:///home/mtk/share/doc/git-doc/technical/api-trace2.html 3. Git concepts chapter of the user-manual file:///home/mtk/share/doc/git-doc/user-manual.html#git-concepts 4. howto file:///home/mtk/share/doc/git-doc/howto-index.html 5. Git API documentation file:///home/mtk/share/doc/git-doc/technical/api-index.html 6. git@vger.kernel.org mailto:git@vger.kernel.org 7. git-security@googlegroups.com mailto:git-security@googlegroups.com COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT(1)
# git\n\n> Distributed version control system.\n> Some subcommands such as `commit`, `add`, `branch`, `checkout`, `push`, etc. have their own usage documentation.\n> More information: <https://git-scm.com/>.\n\n- Execute a Git subcommand:\n\n`git {{subcommand}}`\n\n- Execute a Git subcommand on a custom repository root path:\n\n`git -C {{path/to/repo}} {{subcommand}}`\n\n- Execute a Git subcommand with a given configuration set:\n\n`git -c '{{config.key}}={{value}}' {{subcommand}}`\n\n- Display help:\n\n`git --help`\n\n- Display help for a specific subcommand (like `clone`, `add`, `push`, `log`, etc.):\n\n`git help {{subcommand}}`\n\n- Display version:\n\n`git --version`\n
git-add
git-add(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-add(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | INTERACTIVE MODE | EDITING PATCHES | CONFIGURATION | SEE ALSO | GIT | COLOPHON GIT-ADD(1) Git Manual GIT-ADD(1) NAME top git-add - Add file contents to the index SYNOPSIS top git add [--verbose | -v] [--dry-run | -n] [--force | -f] [--interactive | -i] [--patch | -p] [--edit | -e] [--[no-]all | --[no-]ignore-removal | [--update | -u]] [--sparse] [--intent-to-add | -N] [--refresh] [--ignore-errors] [--ignore-missing] [--renormalize] [--chmod=(+|-)x] [--pathspec-from-file=<file> [--pathspec-file-nul]] [--] [<pathspec>...] DESCRIPTION top This command updates the index using the current content found in the working tree, to prepare the content staged for the next commit. It typically adds the current content of existing paths as a whole, but with some options it can also be used to add content with only part of the changes made to the working tree files applied, or remove paths that do not exist in the working tree anymore. The "index" holds a snapshot of the content of the working tree, and it is this snapshot that is taken as the contents of the next commit. Thus after making any changes to the working tree, and before running the commit command, you must use the add command to add any new or modified files to the index. This command can be performed multiple times before a commit. It only adds the content of the specified file(s) at the time the add command is run; if you want subsequent changes included in the next commit, then you must run git add again to add the new content to the index. The git status command can be used to obtain a summary of which files have changes that are staged for the next commit. The git add command will not add ignored files by default. If any ignored files were explicitly specified on the command line, git add will fail with a list of ignored files. Ignored files reached by directory recursion or filename globbing performed by Git (quote your globs before the shell) will be silently ignored. The git add command can be used to add ignored files with the -f (force) option. Please see git-commit(1) for alternative ways to add content to a commit. OPTIONS top <pathspec>... Files to add content from. Fileglobs (e.g. *.c) can be given to add all matching files. Also a leading directory name (e.g. dir to add dir/file1 and dir/file2) can be given to update the index to match the current state of the directory as a whole (e.g. specifying dir will record not just a file dir/file1 modified in the working tree, a file dir/file2 added to the working tree, but also a file dir/file3 removed from the working tree). Note that older versions of Git used to ignore removed files; use --no-all option if you want to add modified or new files but ignore removed ones. For more details about the <pathspec> syntax, see the pathspec entry in gitglossary(7). -n, --dry-run Dont actually add the file(s), just show if they exist and/or will be ignored. -v, --verbose Be verbose. -f, --force Allow adding otherwise ignored files. --sparse Allow updating index entries outside of the sparse-checkout cone. Normally, git add refuses to update index entries whose paths do not fit within the sparse-checkout cone, since those files might be removed from the working tree without warning. See git-sparse-checkout(1) for more details. -i, --interactive Add modified contents in the working tree interactively to the index. 
Optional path arguments may be supplied to limit operation to a subset of the working tree. See Interactive mode for details. -p, --patch Interactively choose hunks of patch between the index and the work tree and add them to the index. This gives the user a chance to review the difference before adding modified contents to the index. This effectively runs add --interactive, but bypasses the initial command menu and directly jumps to the patch subcommand. See Interactive mode for details. -e, --edit Open the diff vs. the index in an editor and let the user edit it. After the editor was closed, adjust the hunk headers and apply the patch to the index. The intent of this option is to pick and choose lines of the patch to apply, or even to modify the contents of lines to be staged. This can be quicker and more flexible than using the interactive hunk selector. However, it is easy to confuse oneself and create a patch that does not apply to the index. See EDITING PATCHES below. -u, --update Update the index just where it already has an entry matching <pathspec>. This removes as well as modifies index entries to match the working tree, but adds no new files. If no <pathspec> is given when -u option is used, all tracked files in the entire working tree are updated (old versions of Git used to limit the update to the current directory and its subdirectories). -A, --all, --no-ignore-removal Update the index not only where the working tree has a file matching <pathspec> but also where the index already has an entry. This adds, modifies, and removes index entries to match the working tree. If no <pathspec> is given when -A option is used, all files in the entire working tree are updated (old versions of Git used to limit the update to the current directory and its subdirectories). --no-all, --ignore-removal Update the index by adding new files that are unknown to the index and files modified in the working tree, but ignore files that have been removed from the working tree. This option is a no-op when no <pathspec> is used. This option is primarily to help users who are used to older versions of Git, whose "git add <pathspec>..." was a synonym for "git add --no-all <pathspec>...", i.e. ignored removed files. -N, --intent-to-add Record only the fact that the path will be added later. An entry for the path is placed in the index with no content. This is useful for, among other things, showing the unstaged content of such files with git diff and committing them with git commit -a. --refresh Dont add the file(s), but only refresh their stat() information in the index. --ignore-errors If some files could not be added because of errors indexing them, do not abort the operation, but continue adding the others. The command shall still exit with non-zero status. The configuration variable add.ignoreErrors can be set to true to make this the default behaviour. --ignore-missing This option can only be used together with --dry-run. By using this option the user can check if any of the given files would be ignored, no matter if they are already present in the work tree or not. --no-warn-embedded-repo By default, git add will warn when adding an embedded repository to the index without using git submodule add to create an entry in .gitmodules. This option will suppress the warning (e.g., if you are manually performing operations on submodules). --renormalize Apply the "clean" process freshly to all tracked files to forcibly add them again to the index. 
This is useful after changing core.autocrlf configuration or the text attribute in order to correct files added with wrong CRLF/LF line endings. This option implies -u. Lone CR characters are untouched, thus while a CRLF cleans to LF, a CRCRLF sequence is only partially cleaned to CRLF. --chmod=(+|-)x Override the executable bit of the added files. The executable bit is only changed in the index, the files on disk are left unchanged. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). -- This option can be used to separate command-line options from the list of files, (useful when filenames might be mistaken for command-line options). EXAMPLES top Adds content from all *.txt files under Documentation directory and its subdirectories: $ git add Documentation/\*.txt Note that the asterisk * is quoted from the shell in this example; this lets the command include the files from subdirectories of Documentation/ directory. Considers adding content from all git-*.sh scripts: $ git add git-*.sh Because this example lets the shell expand the asterisk (i.e. you are listing the files explicitly), it does not consider subdir/git-foo.sh. INTERACTIVE MODE top When the command enters the interactive mode, it shows the output of the status subcommand, and then goes into its interactive command loop. The command loop shows the list of subcommands available, and gives a prompt "What now> ". In general, when the prompt ends with a single >, you can pick only one of the choices given and type return, like this: *** Commands *** 1: status 2: update 3: revert 4: add untracked 5: patch 6: diff 7: quit 8: help What now> 1 You also could say s or sta or status above as long as the choice is unique. The main command loop has 6 subcommands (plus help and quit). status This shows the change between HEAD and index (i.e. what will be committed if you say git commit), and between index and working tree files (i.e. what you could stage further before git commit using git add) for each path. A sample output looks like this: staged unstaged path 1: binary nothing foo.png 2: +403/-35 +1/-1 add-interactive.c It shows that foo.png has differences from HEAD (but that is binary so line count cannot be shown) and there is no difference between indexed copy and the working tree version (if the working tree version were also different, binary would have been shown in place of nothing). The other file, add-interactive.c, has 403 lines added and 35 lines deleted if you commit what is in the index, but working tree file has further modifications (one addition and one deletion). update This shows the status information and issues an "Update>>" prompt. When the prompt ends with double >>, you can make more than one selection, concatenated with whitespace or comma. Also you can say ranges. E.g. "2-5 7,9" to choose 2,3,4,5,7,9 from the list. If the second number in a range is omitted, all remaining patches are taken. E.g. "7-" to choose 7,8,9 from the list. You can say * to choose everything. 
What you chose are then highlighted with *, like this: staged unstaged path 1: binary nothing foo.png * 2: +403/-35 +1/-1 add-interactive.c To remove selection, prefix the input with - like this: Update>> -2 After making the selection, answer with an empty line to stage the contents of working tree files for selected paths in the index. revert This has a very similar UI to update, and the staged information for selected paths are reverted to that of the HEAD version. Reverting new paths makes them untracked. add untracked This has a very similar UI to update and revert, and lets you add untracked paths to the index. patch This lets you choose one path out of a status like selection. After choosing the path, it presents the diff between the index and the working tree file and asks you if you want to stage the change of each hunk. You can select one of the following options and type return: y - stage this hunk n - do not stage this hunk q - quit; do not stage this hunk or any of the remaining ones a - stage this hunk and all later hunks in the file d - do not stage this hunk or any of the later hunks in the file g - select a hunk to go to / - search for a hunk matching the given regex j - leave this hunk undecided, see next undecided hunk J - leave this hunk undecided, see next hunk k - leave this hunk undecided, see previous undecided hunk K - leave this hunk undecided, see previous hunk s - split the current hunk into smaller hunks e - manually edit the current hunk ? - print help After deciding the fate for all hunks, if there is any hunk that was chosen, the index is updated with the selected hunks. You can omit having to type return here, by setting the configuration variable interactive.singleKey to true. diff This lets you review what will be committed (i.e. between HEAD and index). EDITING PATCHES top Invoking git add -e or selecting e from the interactive hunk selector will open a patch in your editor; after the editor exits, the result is applied to the index. You are free to make arbitrary changes to the patch, but note that some changes may have confusing results, or even result in a patch that cannot be applied. If you want to abort the operation entirely (i.e., stage nothing new in the index), simply delete all lines of the patch. The list below describes some common things you may see in a patch, and which editing operations make sense on them. added content Added content is represented by lines beginning with "+". You can prevent staging any addition lines by deleting them. removed content Removed content is represented by lines beginning with "-". You can prevent staging their removal by converting the "-" to a " " (space). modified content Modified content is represented by "-" lines (removing the old content) followed by "+" lines (adding the replacement content). You can prevent staging the modification by converting "-" lines to " ", and removing "+" lines. Beware that modifying only half of the pair is likely to introduce confusing changes to the index. There are also more complex operations that can be performed. But beware that because the patch is applied only to the index and not the working tree, the working tree will appear to "undo" the change in the index. For example, introducing a new line into the index that is in neither the HEAD nor the working tree will stage the new line for commit, but the line will appear to be reverted in the working tree. Avoid using these constructs, or do so with extreme caution. 
removing untouched content Content which does not differ between the index and working tree may be shown on context lines, beginning with a " " (space). You can stage context lines for removal by converting the space to a "-". The resulting working tree file will appear to re-add the content. modifying existing content One can also modify context lines by staging them for removal (by converting " " to "-") and adding a "+" line with the new content. Similarly, one can modify "+" lines for existing additions or modifications. In all cases, the new modification will appear reverted in the working tree. new content You may also add new content that does not exist in the patch; simply add new lines, each starting with "+". The addition will appear reverted in the working tree. There are also several operations which should be avoided entirely, as they will make the patch impossible to apply: adding context (" ") or removal ("-") lines deleting context or removal lines modifying the contents of context or removal lines CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as what's found there: add.ignoreErrors, add.ignore-errors (deprecated) Tells git add to continue adding files when some files cannot be added due to indexing errors. Equivalent to the --ignore-errors option of git-add(1). add.ignore-errors is deprecated, as it does not follow the usual naming convention for configuration variables. add.interactive.useBuiltin Unused configuration variable. Used in Git versions v2.25.0 to v2.36.0 to enable the built-in version of git-add(1)'s interactive mode, which then became the default in Git versions v2.37.0 to v2.39.0. SEE ALSO top git-status(1) git-rm(1) git-reset(1) git-mv(1) git-commit(1) git-update-index(1) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-ADD(1)
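As a supplement to the EXAMPLES section of this page, a minimal sketch of a few of the options described above; the file names are hypothetical placeholders:

   $ git add -p src/main.c                 # stage selected hunks interactively
   $ git add -N docs/new-chapter.txt       # record only the intent to add; content stays unstaged
   $ git add --chmod=+x scripts/run.sh     # stage the file and mark it executable in the index
   $ printf 'a.txt\nb.txt\n' | git add --pathspec-from-file=-   # read pathspecs from standard input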
# git add\n\n> Adds changed files to the index.\n> More information: <https://git-scm.com/docs/git-add>.\n\n- Add a file to the index:\n\n`git add {{path/to/file}}`\n\n- Add all files (tracked and untracked):\n\n`git add -A`\n\n- Only add already tracked files:\n\n`git add -u`\n\n- Also add ignored files:\n\n`git add -f`\n\n- Interactively stage parts of files:\n\n`git add -p`\n\n- Interactively stage parts of a given file:\n\n`git add -p {{path/to/file}}`\n\n- Interactively stage a file:\n\n`git add -i`\n
git-am
git-am(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-am(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | DISCUSSION | HOOKS | CONFIGURATION | SEE ALSO | GIT | COLOPHON GIT-AM(1) Git Manual GIT-AM(1) NAME top git-am - Apply a series of patches from a mailbox SYNOPSIS top git am [--signoff] [--keep] [--[no-]keep-cr] [--[no-]utf8] [--no-verify] [--[no-]3way] [--interactive] [--committer-date-is-author-date] [--ignore-date] [--ignore-space-change | --ignore-whitespace] [--whitespace=<action>] [-C<n>] [-p<n>] [--directory=<dir>] [--exclude=<path>] [--include=<path>] [--reject] [-q | --quiet] [--[no-]scissors] [-S[<keyid>]] [--patch-format=<format>] [--quoted-cr=<action>] [--empty=(stop|drop|keep)] [(<mbox> | <Maildir>)...] git am (--continue | --skip | --abort | --quit | --show-current-patch[=(diff|raw)] | --allow-empty) DESCRIPTION top Splits mail messages in a mailbox into commit log messages, authorship information, and patches, and applies them to the current branch. You could think of it as a reverse operation of git-format-patch(1) run on a branch with a straight history without merges. OPTIONS top (<mbox>|<Maildir>)... The list of mailbox files to read patches from. If you do not supply this argument, the command reads from the standard input. If you supply directories, they will be treated as Maildirs. -s, --signoff Add a Signed-off-by trailer to the commit message, using the committer identity of yourself. See the signoff option in git-commit(1) for more information. -k, --keep Pass -k flag to git mailinfo (see git-mailinfo(1)). --keep-non-patch Pass -b flag to git mailinfo (see git-mailinfo(1)). --[no-]keep-cr With --keep-cr, call git mailsplit (see git-mailsplit(1)) with the same option, to prevent it from stripping CR at the end of lines. am.keepcr configuration variable can be used to specify the default behaviour. --no-keep-cr is useful to override am.keepcr. -c, --scissors Remove everything in body before a scissors line (see git-mailinfo(1)). Can be activated by default using the mailinfo.scissors configuration variable. --no-scissors Ignore scissors lines (see git-mailinfo(1)). --quoted-cr=<action> This flag will be passed down to git mailinfo (see git-mailinfo(1)). --empty=(stop|drop|keep) By default, or when the option is set to stop, the command errors out on an input e-mail message lacking a patch and stops in the middle of the current am session. When this option is set to drop, skip such an e-mail message instead. When this option is set to keep, create an empty commit, recording the contents of the e-mail message as its log. -m, --message-id Pass the -m flag to git mailinfo (see git-mailinfo(1)), so that the Message-ID header is added to the commit message. The am.messageid configuration variable can be used to specify the default behaviour. --no-message-id Do not add the Message-ID header to the commit message. no-message-id is useful to override am.messageid. -q, --quiet Be quiet. Only print error messages. -u, --utf8 Pass -u flag to git mailinfo (see git-mailinfo(1)). The proposed commit log message taken from the e-mail is re-coded into UTF-8 encoding (configuration variable i18n.commitEncoding can be used to specify the projects preferred encoding if it is not UTF-8). This was optional in prior versions of git, but now it is the default. You can use --no-utf8 to override this. --no-utf8 Pass -n flag to git mailinfo (see git-mailinfo(1)). 
-3, --3way, --no-3way When the patch does not apply cleanly, fall back on 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally. --no-3way can be used to override am.threeWay configuration variable. For more information, see am.threeWay in git-config(1). --rerere-autoupdate, --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. --no-rerere-autoupdate is a good way to double-check what rerere did and catch potential mismerges, before committing the result to the index with a separate git add. --ignore-space-change, --ignore-whitespace, --whitespace=<action>, -C<n>, -p<n>, --directory=<dir>, --exclude=<path>, --include=<path>, --reject These flags are passed to the git apply (see git-apply(1)) program that applies the patch. --patch-format By default the command will try to detect the patch format automatically. This option allows the user to bypass the automatic detection and specify the patch format that the patch(es) should be interpreted as. Valid formats are mbox, mboxrd, stgit, stgit-series, and hg. -i, --interactive Run interactively. -n, --no-verify By default, the pre-applypatch and applypatch-msg hooks are run. When any of --no-verify or -n is given, these are bypassed. See also githooks(5). --committer-date-is-author-date By default the command records the date from the e-mail message as the commit author date, and uses the time of commit creation as the committer date. This allows the user to lie about the committer date by using the same value as the author date. --ignore-date By default the command records the date from the e-mail message as the commit author date, and uses the time of commit creation as the committer date. This allows the user to lie about the author date by using the same value as the committer date. --skip Skip the current patch. This is only meaningful when restarting an aborted patch. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. --continue, -r, --resolved After a patch failure (e.g. attempting to apply conflicting patch), the user has applied it by hand and the index file stores the result of the application. Make a commit using the authorship and commit log extracted from the e-mail message and the current index file, and continue. --resolvemsg=<msg> When a patch failure occurs, <msg> will be printed to the screen before exiting. This overrides the standard message informing you to use --continue or --skip to handle the failure. This is solely for internal use between git rebase and git am. --abort Restore the original branch and abort the patching operation. Revert the contents of files involved in the am operation to their pre-am state. --quit Abort the patching operation but keep HEAD and the index untouched. --show-current-patch[=(diff|raw)] Show the message at which git am has stopped due to conflicts. If raw is specified, show the raw contents of the e-mail message; if diff, show the diff portion only. Defaults to raw. 
--allow-empty After a patch failure on an input e-mail message lacking a patch, create an empty commit with the contents of the e-mail message as its log message. DISCUSSION top The commit author name is taken from the "From: " line of the message, and commit author date is taken from the "Date: " line of the message. The "Subject: " line is used as the title of the commit, after stripping common prefix "[PATCH <anything>]". The "Subject: " line is supposed to concisely describe what the commit is about in one line of text. "From: ", "Date: ", and "Subject: " lines starting the body override the respective commit author name and title values taken from the headers. The commit message is formed by the title taken from the "Subject: ", a blank line and the body of the message up to where the patch begins. Excess whitespace at the end of each line is automatically stripped. The patch is expected to be inline, directly following the message. Any line that is of the form: three-dashes and end-of-line, or a line that begins with "diff -", or a line that begins with "Index: " is taken as the beginning of a patch, and the commit log message is terminated before the first occurrence of such a line. When initially invoking git am, you give it the names of the mailboxes to process. Upon seeing the first patch that does not apply, it aborts in the middle. You can recover from this in one of two ways: 1. skip the current patch by re-running the command with the --skip option. 2. hand resolve the conflict in the working directory, and update the index file to bring it into a state that the patch should have produced. Then run the command with the --continue option. The command refuses to process new mailboxes until the current operation is finished, so if you decide to start over from scratch, run git am --abort before running the command with mailbox names. Before any patches are applied, ORIG_HEAD is set to the tip of the current branch. This is useful if you have problems with multiple commits, like running git am on the wrong branch or an error in the commits that is more easily fixed by changing the mailbox (e.g. errors in the "From:" lines). HOOKS top This command can run applypatch-msg, pre-applypatch, and post-applypatch hooks. See githooks(5) for more information. CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as whats found there: am.keepcr If true, git-am will call git-mailsplit for patches in mbox format with parameter --keep-cr. In this case git-mailsplit will not remove \r from lines ending with \r\n. Can be overridden by giving --no-keep-cr from the command line. See git-am(1), git-mailsplit(1). am.threeWay By default, git am will fail if the patch does not apply cleanly. When set to true, this setting tells git am to fall back on 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally (equivalent to giving the --3way option from the command line). Defaults to false. See git-am(1). SEE ALSO top git-apply(1), git-format-patch(1). GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. 
# git am\n\n> Apply patch files and create a commit. Useful when receiving commits via email.\n> See also `git format-patch`, which can generate patch files.\n> More information: <https://git-scm.com/docs/git-am>.\n\n- Apply and commit changes from a local patch file:\n\n`git am {{path/to/file.patch}}`\n\n- Apply and commit changes from a remote patch file:\n\n`curl -L {{https://example.com/file.patch}} | git am`\n\n- Abort the process of applying a patch file:\n\n`git am --abort`\n\n- Apply as much of a patch file as possible, saving failed hunks to reject files:\n\n`git am --reject {{path/to/file.patch}}`\n
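A minimal sketch of the recovery workflow from the DISCUSSION section above; the mailbox name `series.mbox` and the conflicted path are hypothetical:

$ git am --3way series.mbox            # apply the series; fall back to 3-way merge on conflicts
$ git am --show-current-patch=diff     # inspect the patch git am stopped on
$ $EDITOR path/to/conflicted_file      # resolve the conflict by hand
$ git add path/to/conflicted_file      # record the resolution in the index
$ git am --continue                    # commit it and continue with the rest of the series
$ git am --skip                        # ...or drop the current patch instead
$ git am --abort                       # ...or restore the original branch and stop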
git-annotate
git-annotate(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-annotate(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | GIT | COLOPHON GIT-ANNOTATE(1) Git Manual GIT-ANNOTATE(1) NAME top git-annotate - Annotate file lines with commit information SYNOPSIS top git annotate [<options>] [<rev-opts>] [<rev>] [--] <file> DESCRIPTION top Annotates each line in the given file with information from the commit which introduced the line. Optionally annotates from a given revision. The only difference between this command and git-blame(1) is that they use slightly different output formats, and this command exists only for backward compatibility to support existing scripts, and provide a more familiar command name for people coming from other SCM systems. OPTIONS top -b Show blank SHA-1 for boundary commits. This can also be controlled via the blame.blankBoundary config option. --root Do not treat root commits as boundaries. This can also be controlled via the blame.showRoot config option. --show-stats Include additional statistics at the end of blame output. -L <start>,<end>, -L :<funcname> Annotate only the line range given by <start>,<end>, or by the function name regex <funcname>. May be specified multiple times. Overlapping ranges are allowed. <start> and <end> are optional. -L <start> or -L <start>, spans from <start> to end of file. -L ,<end> spans from start of file to <end>. <start> and <end> can take one of these forms: number If <start> or <end> is a number, it specifies an absolute line number (lines count from 1). /regex/ This form will use the first line matching the given POSIX regex. If <start> is a regex, it will search from the end of the previous -L range, if any, otherwise from the start of file. If <start> is ^/regex/, it will search from the start of file. If <end> is a regex, it will search starting at the line given by <start>. +offset or -offset This is only valid for <end> and will specify a number of lines before or after the line given by <start>. If :<funcname> is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. :<funcname> searches from the end of the previous -L range, if any, otherwise from the start of file. ^:<funcname> searches from the start of file. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). -l Show long rev (Default: off). -t Show raw timestamp (Default: off). -S <revs-file> Use revisions from revs-file instead of calling git-rev-list(1). --reverse <rev>..<rev> Walk history forward instead of backward. Instead of showing the revision in which a line appeared, this shows the last revision in which a line has existed. This requires a range of revision like START..END where the path to blame exists in START. git blame --reverse START is taken as git blame --reverse START..HEAD for convenience. --first-parent Follow only the first parent commit upon seeing a merge commit. This option can be used to determine when a line was introduced to a particular integration branch, rather than when it was introduced to the history overall. -p, --porcelain Show in a format designed for machine consumption. --line-porcelain Show the porcelain format, but output commit information for each line, not just the first time a commit is referenced. Implies --porcelain. 
--incremental Show the result incrementally in a format designed for machine consumption. --encoding=<encoding> Specifies the encoding used to output author names and commit summaries. Setting it to none makes blame output unconverted data. For more information see the discussion about encoding in the git-log(1) manual page. --contents <file> Annotate using the contents from the named file, starting from <rev> if it is specified, and HEAD otherwise. You may specify - to make the command read from the standard input for the file contents. --date <format> Specifies the format used to output dates. If --date is not provided, the value of the blame.date config variable is used. If the blame.date config variable is also not set, the iso format is used. For supported values, see the discussion of the --date option at git-log(1). --[no-]progress Progress status is reported on the standard error stream by default when it is attached to a terminal. This flag enables progress reporting even if not attached to a terminal. Cant use --progress together with --porcelain or --incremental. -M[<num>] Detect moved or copied lines within a file. When a commit moves or copies a block of lines (e.g. the original file has A and then B, and the commit changes it to B and then A), the traditional blame algorithm notices only half of the movement and typically blames the lines that were moved up (i.e. B) to the parent and assigns blame to the lines that were moved down (i.e. A) to the child commit. With this option, both groups of lines are blamed on the parent by running extra passes of inspection. <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying within a file for it to associate those lines with the parent commit. The default value is 20. -C[<num>] In addition to -M, detect lines moved or copied from other files that were modified in the same commit. This is useful when you reorganize your program and move code around across files. When this option is given twice, the command additionally looks for copies from other files in the commit that creates the file. When this option is given three times, the command additionally looks for copies from other files in any commit. <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying between files for it to associate those lines with the parent commit. And the default value is 40. If there are more than one -C options given, the <num> argument of the last -C will take effect. --ignore-rev <rev> Ignore changes made by the revision when assigning blame, as if the change never happened. Lines that were changed or added by an ignored commit will be blamed on the previous commit that changed that line or nearby lines. This option may be specified multiple times to ignore more than one revision. If the blame.markIgnoredLines config option is set, then lines that were changed by an ignored commit and attributed to another commit will be marked with a ? in the blame output. If the blame.markUnblamableLines config option is set, then those lines touched by an ignored commit that we could not attribute to another revision are marked with a *. --ignore-revs-file <file> Ignore revisions listed in file, which must be in the same format as an fsck.skipList. This option may be repeated, and these files will be processed after any files specified with the blame.ignoreRevsFile config option. 
An empty file name, "", will clear the list of revs from previously processed files. --color-lines Color line annotations in the default format differently if they come from the same commit as the preceding line. This makes it easier to distinguish code blocks introduced by different commits. The color defaults to cyan and can be adjusted using the color.blame.repeatedLines config option. --color-by-age Color line annotations depending on the age of the line in the default format. The color.blame.highlightRecent config option controls what color is used for each range of age. -h Show help message. SEE ALSO top git-blame(1) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-ANNOTATE(1) Pages that refer to this page: git(1), git-blame(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git annotate\n\n> Show commit hash and last author on each line of a file.\n> See `git blame`, which is preferred over `git annotate`.\n> `git annotate` is provided for those familiar with other version control systems.\n> More information: <https://git-scm.com/docs/git-annotate>.\n\n- Print a file with the author name and commit hash prepended to each line:\n\n`git annotate {{path/to/file}}`\n\n- Print a file with the author [e]mail and commit hash prepended to each line:\n\n`git annotate -e {{path/to/file}}`\n\n- Annotate only the lines of the function whose name matches a regular expression:\n\n`git annotate -L :{{regexp}} {{path/to/file}}`\n
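The `-L` forms described in the options above compose as follows; the file `src/main.c`, the regexes, and the function name are hypothetical examples:

$ git annotate -L 15,30 src/main.c                 # absolute line range 15 through 30
$ git annotate -L '/^static int/,+20' src/main.c   # 20 lines starting at the first line matching the regex
$ git annotate -L :parse_args src/main.c           # the function whose name matches "parse_args"
$ git annotate -l -t src/main.c                    # long revision ids and raw timestamps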
git-apply
git-apply(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-apply(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFIGURATION | SUBMODULES | SEE ALSO | GIT | COLOPHON GIT-APPLY(1) Git Manual GIT-APPLY(1) NAME top git-apply - Apply a patch to files and/or to the index SYNOPSIS top git apply [--stat] [--numstat] [--summary] [--check] [--index | --intent-to-add] [--3way] [--apply] [--no-add] [--build-fake-ancestor=<file>] [-R | --reverse] [--allow-binary-replacement | --binary] [--reject] [-z] [-p<n>] [-C<n>] [--inaccurate-eof] [--recount] [--cached] [--ignore-space-change | --ignore-whitespace] [--whitespace=(nowarn|warn|fix|error|error-all)] [--exclude=<path>] [--include=<path>] [--directory=<root>] [--verbose | --quiet] [--unsafe-paths] [--allow-empty] [<patch>...] DESCRIPTION top Reads the supplied diff output (i.e. "a patch") and applies it to files. When running from a subdirectory in a repository, patched paths outside the directory are ignored. With the --index option, the patch is also applied to the index, and with the --cached option, the patch is only applied to the index. Without these options, the command applies the patch only to files, and does not require them to be in a Git repository. This command applies the patch but does not create a commit. Use git-am(1) to create commits from patches generated by git-format-patch(1) and/or received by email. OPTIONS top <patch>... The files to read the patch from. - can be used to read from the standard input. --stat Instead of applying the patch, output diffstat for the input. Turns off "apply". --numstat Similar to --stat, but shows the number of added and deleted lines in decimal notation and the pathname without abbreviation, to make it more machine friendly. For binary files, outputs two - instead of saying 0 0. Turns off "apply". --summary Instead of applying the patch, output a condensed summary of information obtained from git diff extended headers, such as creations, renames, and mode changes. Turns off "apply". --check Instead of applying the patch, see if the patch is applicable to the current working tree and/or the index file and detects errors. Turns off "apply". --index Apply the patch to both the index and the working tree (or merely check that it would apply cleanly to both if --check is in effect). Note that --index expects index entries and working tree copies for relevant paths to be identical (their contents and metadata such as file mode must match), and will raise an error if they are not, even if the patch would apply cleanly to both the index and the working tree in isolation. --cached Apply the patch to just the index, without touching the working tree. If --check is in effect, merely check that it would apply cleanly to the index entry. --intent-to-add When applying the patch only to the working tree, mark new files to be added to the index later (see --intent-to-add option in git-add(1)). This option is ignored unless running in a Git repository and --index is not specified. Note that --index could be implied by other options such as --cached or --3way. -3, --3way Attempt 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally, possibly leaving the conflict markers in the files in the working tree for the user to resolve. This option implies the --index option unless the --cached option is used, and is incompatible with the --reject option. 
When used with the --cached option, any conflicts are left at higher stages in the cache. --build-fake-ancestor=<file> Newer git diff output has embedded index information for each blob to help identify the original version that the patch applies to. When this flag is given, and if the original versions of the blobs are available locally, builds a temporary index containing those blobs. When a pure mode change is encountered (which has no index information), the information is read from the current index instead. -R, --reverse Apply the patch in reverse. --reject For atomicity, git apply by default fails the whole patch and does not touch the working tree when some of the hunks do not apply. This option makes it apply the parts of the patch that are applicable, and leave the rejected hunks in corresponding *.rej files. -z When --numstat has been given, do not munge pathnames, but use a NUL-terminated machine-readable format. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). -p<n> Remove <n> leading path components (separated by slashes) from traditional diff paths. E.g., with -p2, a patch against a/dir/file will be applied directly to file. The default is 1. -C<n> Ensure at least <n> lines of surrounding context match before and after each change. When fewer lines of surrounding context exist they all must match. By default no context is ever ignored. --unidiff-zero By default, git apply expects that the patch being applied is a unified diff with at least one line of context. This provides good safety measures, but breaks down when applying a diff generated with --unified=0. To bypass these checks use --unidiff-zero. Note, for the reasons stated above, the usage of context-free patches is discouraged. --apply If you use any of the options marked "Turns off apply" above, git apply reads and outputs the requested information without actually applying the patch. Give this flag after those flags to also apply the patch. --no-add When applying a patch, ignore additions made by the patch. This can be used to extract the common part between two files by first running diff on them and applying the result with this option, which would apply the deletion part but not the addition part. --allow-binary-replacement, --binary Historically we did not allow binary patch application without an explicit permission from the user, and this flag was the way to do so. Currently, we always allow binary patch application, so this is a no-op. --exclude=<path-pattern> Dont apply changes to files matching the given path pattern. This can be useful when importing patchsets, where you want to exclude certain files or directories. --include=<path-pattern> Apply changes to files matching the given path pattern. This can be useful when importing patchsets, where you want to include certain files or directories. When --exclude and --include patterns are used, they are examined in the order they appear on the command line, and the first match determines if a patch to each path is used. A patch to a path that does not match any include/exclude pattern is used by default if there is no include pattern on the command line, and ignored if there is any include pattern. --ignore-space-change, --ignore-whitespace When applying a patch, ignore changes in whitespace in context lines if necessary. Context lines will preserve their whitespace, and they will not undergo whitespace fixing regardless of the value of the --whitespace option. 
New lines will still be fixed, though. --whitespace=<action> When applying a patch, detect a new or modified line that has whitespace errors. What are considered whitespace errors is controlled by core.whitespace configuration. By default, trailing whitespaces (including lines that solely consist of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. By default, the command outputs warning messages but applies the patch. When git-apply is used for statistics and not applying a patch, it defaults to nowarn. You can use different <action> values to control this behavior: nowarn turns off the trailing whitespace warning. warn outputs warnings for a few such errors, but applies the patch as-is (default). fix outputs warnings for a few such errors, and applies the patch after fixing them (strip is a synonym the tool used to consider only trailing whitespace characters as errors, and the fix involved stripping them, but modern Gits do more). error outputs warnings for a few such errors, and refuses to apply the patch. error-all is similar to error but shows all errors. --inaccurate-eof Under certain circumstances, some versions of diff do not correctly detect a missing new-line at the end of the file. As a result, patches created by such diff programs do not record incomplete lines correctly. This option adds support for applying such patches by working around this bug. -v, --verbose Report progress to stderr. By default, only a message about the current patch being applied will be printed. This option will cause additional information to be reported. -q, --quiet Suppress stderr output. Messages about patch status and progress will not be printed. --recount Do not trust the line counts in the hunk headers, but infer them by inspecting the patch (e.g. after editing the patch without adjusting the hunk headers appropriately). --directory=<root> Prepend <root> to all filenames. If a "-p" argument was also passed, it is applied before prepending the new root. For example, a patch that talks about updating a/git-gui.sh to b/git-gui.sh can be applied to the file in the working tree modules/git-gui/git-gui.sh by running git apply --directory=modules/git-gui. --unsafe-paths By default, a patch that affects outside the working area (either a Git controlled working tree, or the current working directory when "git apply" is used as a replacement of GNU patch) is rejected as a mistake (or a mischief). When git apply is used as a "better GNU patch", the user can pass the --unsafe-paths option to override this safety check. This option has no effect when --index or --cached is in use. --allow-empty Dont return an error for patches containing no diff. This includes empty patches and patches with commit text only. CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as whats found there: apply.ignoreWhitespace When set to change, tells git apply to ignore changes in whitespace, in the same way as the --ignore-space-change option. When set to one of: no, none, never, false, it tells git apply to respect all whitespace differences. See git-apply(1). apply.whitespace Tells git apply how to handle whitespace, in the same way as the --whitespace option. See git-apply(1). SUBMODULES top If the patch contains any changes to submodules then git apply treats these changes as follows. 
If --index is specified (explicitly or implicitly), then the submodule commits must match the index exactly for the patch to apply. If any of the submodules are checked-out, then these check-outs are completely ignored, i.e., they are not required to be up to date or clean and they are not updated. If --index is not specified, then the submodule commits in the patch are ignored and only the absence or presence of the corresponding subdirectory is checked and (if possible) updated. SEE ALSO top git-am(1). GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-APPLY(1) Pages that refer to this page: git(1), git-am(1), git-apply(1), git-config(1), git-diff(1), git-range-diff(1), git-rebase(1), git-stripspace(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git apply\n\n> Apply a patch to files and/or to the index without creating a commit.\n> See also `git am`, which applies a patch and also creates a commit.\n> More information: <https://git-scm.com/docs/git-apply>.\n\n- Print messages about the patched files:\n\n`git apply --verbose {{path/to/file}}`\n\n- Apply the patch to both the working tree and the index:\n\n`git apply --index {{path/to/file}}`\n\n- Apply a remote patch file:\n\n`curl -L {{https://example.com/file.patch}} | git apply`\n\n- Output diffstat for the input and apply the patch:\n\n`git apply --stat --apply {{path/to/file}}`\n\n- Apply the patch in reverse:\n\n`git apply --reverse {{path/to/file}}`\n\n- Store the patch result in the index without modifying the working tree:\n\n`git apply --cached {{path/to/file}}`\n
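A cautious apply sequence built from the options above; `feature.patch` is a hypothetical file name:

$ git apply --stat feature.patch      # preview the diffstat without applying
$ git apply --check feature.patch     # exit non-zero if the patch would not apply cleanly
$ git apply --index feature.patch     # apply to both the working tree and the index
$ git apply --3way feature.patch      # or: attempt a 3-way merge, leaving any conflicts to resolve
$ git apply --reject feature.patch    # or: apply what fits, leaving failed hunks in *.rej files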
git-archive
git-archive(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-archive(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | BACKEND EXTRA OPTIONS | CONFIGURATION | ATTRIBUTES | EXAMPLES | SEE ALSO | GIT | COLOPHON GIT-ARCHIVE(1) Git Manual GIT-ARCHIVE(1) NAME top git-archive - Create an archive of files from a named tree SYNOPSIS top git archive [--format=<fmt>] [--list] [--prefix=<prefix>/] [<extra>] [-o <file> | --output=<file>] [--worktree-attributes] [--remote=<repo> [--exec=<git-upload-archive>]] <tree-ish> [<path>...] DESCRIPTION top Creates an archive of the specified format containing the tree structure for the named tree, and writes it out to the standard output. If <prefix> is specified it is prepended to the filenames in the archive. git archive behaves differently when given a tree ID as opposed to a commit ID or tag ID. When a tree ID is provided, the current time is used as the modification time of each file in the archive. On the other hand, when a commit ID or tag ID is provided, the commit time as recorded in the referenced commit object is used instead. Additionally the commit ID is stored in a global extended pax header if the tar format is used; it can be extracted using git get-tar-commit-id. In ZIP files it is stored as a file comment. OPTIONS top --format=<fmt> Format of the resulting archive. Possible values are tar, zip, tar.gz, tgz, and any format defined using the configuration option tar.<format>.command. If --format is not given, and the output file is specified, the format is inferred from the filename if possible (e.g. writing to foo.zip makes the output to be in the zip format). Otherwise the output format is tar. -l, --list Show all available formats. -v, --verbose Report progress to stderr. --prefix=<prefix>/ Prepend <prefix>/ to paths in the archive. Can be repeated; its rightmost value is used for all tracked files. See below which value gets used by --add-file and --add-virtual-file. -o <file>, --output=<file> Write the archive to <file> instead of stdout. --add-file=<file> Add a non-tracked file to the archive. Can be repeated to add multiple files. The path of the file in the archive is built by concatenating the value of the last --prefix option (if any) before this --add-file and the basename of <file>. --add-virtual-file=<path>:<content> Add the specified contents to the archive. Can be repeated to add multiple files. The path of the file in the archive is built by concatenating the value of the last --prefix option (if any) before this --add-virtual-file and <path>. The <path> argument can start and end with a literal double-quote character; the contained file name is interpreted as a C-style string, i.e. the backslash is interpreted as escape character. The path must be quoted if it contains a colon, to avoid the colon from being misinterpreted as the separator between the path and the contents, or if the path begins or ends with a double-quote character. The file mode is limited to a regular file, and the option may be subject to platform-dependent command-line limits. For non-trivial cases, write an untracked file and use --add-file instead. --worktree-attributes Look for attributes in .gitattributes files in the working tree as well (see the section called ATTRIBUTES). --mtime=<time> Set modification time of archive entries. Without this option the committer time is used if <tree-ish> is a commit or tag, and the current time if it is a tree. 
<extra> This can be any options that the archiver backend understands. See next section. --remote=<repo> Instead of making a tar archive from the local repository, retrieve a tar archive from a remote repository. Note that the remote repository may place restrictions on which sha1 expressions may be allowed in <tree-ish>. See git-upload-archive(1) for details. --exec=<git-upload-archive> Used with --remote to specify the path to the git-upload-archive on the remote side. <tree-ish> The tree or commit to produce an archive for. <path> Without an optional path parameter, all files and subdirectories of the current working directory are included in the archive. If one or more paths are specified, only these are included. BACKEND EXTRA OPTIONS top zip -<digit> Specify compression level. Larger values allow the command to spend more time to compress to smaller size. Supported values are from -0 (store-only) to -9 (best ratio). Default is -6 if not given. tar -<number> Specify compression level. The value will be passed to the compression command configured in tar.<format>.command. See manual page of the configured command for the list of supported levels and the default level if this option isnt specified. CONFIGURATION top tar.umask This variable can be used to restrict the permission bits of tar archive entries. The default is 0002, which turns off the world write bit. The special value "user" indicates that the archiving users umask will be used instead. See umask(2) for details. If --remote is used then only the configuration of the remote repository takes effect. tar.<format>.command This variable specifies a shell command through which the tar output generated by git archive should be piped. The command is executed using the shell with the generated tar file on its standard input, and should produce the final output on its standard output. Any compression-level options will be passed to the command (e.g., -9). The tar.gz and tgz formats are defined automatically and use the magic command git archive gzip by default, which invokes an internal implementation of gzip. tar.<format>.remote If true, enable the format for use by remote clients via git-upload-archive(1). Defaults to false for user-defined formats, but true for the tar.gz and tgz formats. ATTRIBUTES top export-ignore Files and directories with the attribute export-ignore wont be added to archive files. See gitattributes(5) for details. export-subst If the attribute export-subst is set for a file then Git will expand several placeholders when adding this file to an archive. See gitattributes(5) for details. Note that attributes are by default taken from the .gitattributes files in the tree that is being archived. If you want to tweak the way the output is generated after the fact (e.g. you committed without adding an appropriate export-ignore in its .gitattributes), adjust the checked out .gitattributes file as necessary and use --worktree-attributes option. Alternatively you can keep necessary attributes that should apply while archiving any tree in your $GIT_DIR/info/attributes file. EXAMPLES top git archive --format=tar --prefix=junk/ HEAD | (cd /var/tmp/ && tar xf -) Create a tar archive that contains the contents of the latest commit on the current branch, and extract it in the /var/tmp/junk directory. git archive --format=tar --prefix=git-1.4.0/ v1.4.0 | gzip >git-1.4.0.tar.gz Create a compressed tarball for v1.4.0 release. 
git archive --format=tar.gz --prefix=git-1.4.0/ v1.4.0 >git-1.4.0.tar.gz Same as above, but using the builtin tar.gz handling. git archive --prefix=git-1.4.0/ -o git-1.4.0.tar.gz v1.4.0 Same as above, but the format is inferred from the output file. git archive --format=tar --prefix=git-1.4.0/ v1.4.0^{tree} | gzip >git-1.4.0.tar.gz Create a compressed tarball for v1.4.0 release, but without a global extended pax header. git archive --format=zip --prefix=git-docs/ HEAD:Documentation/ > git-1.4.0-docs.zip Put everything in the current heads Documentation/ directory into git-1.4.0-docs.zip, with the prefix git-docs/. git archive -o latest.zip HEAD Create a Zip archive that contains the contents of the latest commit on the current branch. Note that the output format is inferred by the extension of the output file. git archive -o latest.tar --prefix=build/ --add-file=configure --prefix= HEAD Creates a tar archive that contains the contents of the latest commit on the current branch with no prefix and the untracked file configure with the prefix build/. git config tar.tar.xz.command "xz -c" Configure a "tar.xz" format for making LZMA-compressed tarfiles. You can use it specifying --format=tar.xz, or by creating an output file like -o foo.tar.xz. SEE ALSO top gitattributes(5) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-ARCHIVE(1) Pages that refer to this page: git(1), git-config(1), gitattributes(5), gitweb.conf(5) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git archive\n\n> Create an archive of files from a tree.\n> More information: <https://git-scm.com/docs/git-archive>.\n\n- Create a tar archive from the contents of the current HEAD and print it to `stdout`:\n\n`git archive --verbose HEAD`\n\n- Use the Zip format and report progress [v]erbosely:\n\n`git archive {{-v|--verbose}} --format zip HEAD`\n\n- [o]utput the Zip archive to a specific file:\n\n`git archive -v {{-o|--output}} {{path/to/file.zip}} HEAD`\n\n- Create a tar archive from the contents of the latest commit of a specific branch:\n\n`git archive -o {{path/to/file.tar}} {{branch_name}}`\n\n- Use the contents of a specific directory:\n\n`git archive -o {{path/to/file.tar}} HEAD:{{path/to/directory}}`\n\n- Prepend a path to each file to archive it inside a specific directory:\n\n`git archive -o {{path/to/file.tar}} --prefix {{path/to/prepend}}/ HEAD`\n
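Following the CONFIGURATION and EXAMPLES sections above, a sketch of defining and using a custom LZMA-compressed format; the tag, prefix, and remote URL are hypothetical:

$ git config tar.tar.xz.command "xz -c"                                  # define a tar.xz backend
$ git archive --format=tar.xz --prefix=myproj-1.2.0/ -o myproj-1.2.0.tar.xz v1.2.0
$ git archive --remote=git@example.com:myproj.git -o snapshot.tar HEAD   # archive a remote tree, if the server permits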
git-bisect
git-bisect(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-bisect(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | SEE ALSO | GIT | NOTES | COLOPHON GIT-BISECT(1) Git Manual GIT-BISECT(1) NAME top git-bisect - Use binary search to find the commit that introduced a bug SYNOPSIS top git bisect <subcommand> <options> DESCRIPTION top The command takes various subcommands, and different options depending on the subcommand: git bisect start [--term-(new|bad)=<term-new> --term-(old|good)=<term-old>] [--no-checkout] [--first-parent] [<bad> [<good>...]] [--] [<paths>...] git bisect (bad|new|<term-new>) [<rev>] git bisect (good|old|<term-old>) [<rev>...] git bisect terms [--term-good | --term-bad] git bisect skip [(<rev>|<range>)...] git bisect reset [<commit>] git bisect (visualize|view) git bisect replay <logfile> git bisect log git bisect run <cmd> [<arg>...] git bisect help This command uses a binary search algorithm to find which commit in your projects history introduced a bug. You use it by first telling it a "bad" commit that is known to contain the bug, and a "good" commit that is known to be before the bug was introduced. Then git bisect picks a commit between those two endpoints and asks you whether the selected commit is "good" or "bad". It continues narrowing down the range until it finds the exact commit that introduced the change. In fact, git bisect can be used to find the commit that changed any property of your project; e.g., the commit that fixed a bug, or the commit that caused a benchmarks performance to improve. To support this more general usage, the terms "old" and "new" can be used in place of "good" and "bad", or you can choose your own terms. See section "Alternate terms" below for more information. Basic bisect commands: start, bad, good As an example, suppose you are trying to find the commit that broke a feature that was known to work in version v2.6.13-rc2 of your project. You start a bisect session as follows: $ git bisect start $ git bisect bad # Current version is bad $ git bisect good v2.6.13-rc2 # v2.6.13-rc2 is known to be good Once you have specified at least one bad and one good commit, git bisect selects a commit in the middle of that range of history, checks it out, and outputs something similar to the following: Bisecting: 675 revisions left to test after this (roughly 10 steps) You should now compile the checked-out version and test it. If that version works correctly, type $ git bisect good If that version is broken, type $ git bisect bad Then git bisect will respond with something like Bisecting: 337 revisions left to test after this (roughly 9 steps) Keep repeating the process: compile the tree, test it, and depending on whether it is good or bad run git bisect good or git bisect bad to ask for the next commit that needs testing. Eventually there will be no more revisions left to inspect, and the command will print out a description of the first bad commit. The reference refs/bisect/bad will be left pointing at that commit. Bisect reset After a bisect session, to clean up the bisection state and return to the original HEAD, issue the following command: $ git bisect reset By default, this will return your tree to the commit that was checked out before git bisect start. (A new git bisect start will also do that, as it cleans up the old bisection state.) 
With an optional argument, you can return to a different commit instead: $ git bisect reset <commit> For example, git bisect reset bisect/bad will check out the first bad revision, while git bisect reset HEAD will leave you on the current bisection commit and avoid switching commits at all. Alternate terms Sometimes you are not looking for the commit that introduced a breakage, but rather for a commit that caused a change between some other "old" state and "new" state. For example, you might be looking for the commit that introduced a particular fix. Or you might be looking for the first commit in which the source-code filenames were finally all converted to your companys naming standard. Or whatever. In such cases it can be very confusing to use the terms "good" and "bad" to refer to "the state before the change" and "the state after the change". So instead, you can use the terms "old" and "new", respectively, in place of "good" and "bad". (But note that you cannot mix "good" and "bad" with "old" and "new" in a single session.) In this more general usage, you provide git bisect with a "new" commit that has some property and an "old" commit that doesnt have that property. Each time git bisect checks out a commit, you test if that commit has the property. If it does, mark the commit as "new"; otherwise, mark it as "old". When the bisection is done, git bisect will report which commit introduced the property. To use "old" and "new" instead of "good" and bad, you must run git bisect start without commits as argument and then run the following commands to add the commits: git bisect old [<rev>] to indicate that a commit was before the sought change, or git bisect new [<rev>...] to indicate that it was after. To get a reminder of the currently used terms, use git bisect terms You can get just the old (respectively new) term with git bisect terms --term-old or git bisect terms --term-good. If you would like to use your own terms instead of "bad"/"good" or "new"/"old", you can choose any names you like (except existing bisect subcommands like reset, start, ...) by starting the bisection using git bisect start --term-old <term-old> --term-new <term-new> For example, if you are looking for a commit that introduced a performance regression, you might use git bisect start --term-old fast --term-new slow Or if you are looking for the commit that fixed a bug, you might use git bisect start --term-new fixed --term-old broken Then, use git bisect <term-old> and git bisect <term-new> instead of git bisect good and git bisect bad to mark commits. Bisect visualize/view To see the currently remaining suspects in gitk, issue the following command during the bisection process (the subcommand view can be used as an alternative to visualize): $ git bisect visualize Git detects a graphical environment through various environment variables: DISPLAY, which is set in X Window System environments on Unix systems. SESSIONNAME, which is set under Cygwin in interactive desktop sessions. MSYSTEM, which is set under Msys2 and Git for Windows. SECURITYSESSIONID, which may be set on macOS in interactive desktop sessions. If none of these environment variables is set, git log is used instead. You can also give command-line options such as -p and --stat. 
$ git bisect visualize --stat Bisect log and bisect replay After having marked revisions as good or bad, issue the following command to show what has been done so far: $ git bisect log If you discover that you made a mistake in specifying the status of a revision, you can save the output of this command to a file, edit it to remove the incorrect entries, and then issue the following commands to return to a corrected state: $ git bisect reset $ git bisect replay that-file Avoiding testing a commit If, in the middle of a bisect session, you know that the suggested revision is not a good one to test (e.g. it fails to build and you know that the failure does not have anything to do with the bug you are chasing), you can manually select a nearby commit and test that one instead. For example: $ git bisect good/bad # previous round was good or bad. Bisecting: 337 revisions left to test after this (roughly 9 steps) $ git bisect visualize # oops, that is uninteresting. $ git reset --hard HEAD~3 # try 3 revisions before what # was suggested Then compile and test the chosen revision, and afterwards mark the revision as good or bad in the usual manner. Bisect skip Instead of choosing a nearby commit by yourself, you can ask Git to do it for you by issuing the command: $ git bisect skip # Current version cannot be tested However, if you skip a commit adjacent to the one you are looking for, Git will be unable to tell exactly which of those commits was the first bad one. You can also skip a range of commits, instead of just one commit, using range notation. For example: $ git bisect skip v2.5..v2.6 This tells the bisect process that no commit after v2.5, up to and including v2.6, should be tested. Note that if you also want to skip the first commit of the range you would issue the command: $ git bisect skip v2.5 v2.5..v2.6 This tells the bisect process that the commits between v2.5 and v2.6 (inclusive) should be skipped. Cutting down bisection by giving more parameters to bisect start You can further cut down the number of trials, if you know what part of the tree is involved in the problem you are tracking down, by specifying path parameters when issuing the bisect start command: $ git bisect start -- arch/i386 include/asm-i386 If you know beforehand more than one good commit, you can narrow the bisect space down by specifying all of the good commits immediately after the bad commit when issuing the bisect start command: $ git bisect start v2.6.20-rc6 v2.6.20-rc4 v2.6.20-rc1 -- # v2.6.20-rc6 is bad # v2.6.20-rc4 and v2.6.20-rc1 are good Bisect run If you have a script that can tell if the current source code is good or bad, you can bisect by issuing the command: $ git bisect run my_script arguments Note that the script (my_script in the above example) should exit with code 0 if the current source code is good/old, and exit with a code between 1 and 127 (inclusive), except 125, if the current source code is bad/new. Any other exit code will abort the bisect process. It should be noted that a program that terminates via exit(-1) leaves $? = 255, (see the exit(3) manual page), as the value is chopped with & 0377. The special exit code 125 should be used when the current source code cannot be tested. If the script exits with this code, the current revision will be skipped (see git bisect skip above). 
125 was chosen as the highest sensible value to use for this purpose, because 126 and 127 are used by POSIX shells to signal specific error status (127 is for command not found, 126 is for command found but not executablethese details do not matter, as they are normal errors in the script, as far as bisect run is concerned). You may often find that during a bisect session you want to have temporary modifications (e.g. s/#define DEBUG 0/#define DEBUG 1/ in a header file, or "revision that does not have this commit needs this patch applied to work around another problem this bisection is not interested in") applied to the revision being tested. To cope with such a situation, after the inner git bisect finds the next revision to test, the script can apply the patch before compiling, run the real test, and afterwards decide if the revision (possibly with the needed patch) passed the test and then rewind the tree to the pristine state. Finally the script should exit with the status of the real test to let the git bisect run command loop determine the eventual outcome of the bisect session. OPTIONS top --no-checkout Do not checkout the new working tree at each iteration of the bisection process. Instead just update a special reference named BISECT_HEAD to make it point to the commit that should be tested. This option may be useful when the test you would perform in each step does not require a checked out tree. If the repository is bare, --no-checkout is assumed. --first-parent Follow only the first parent commit upon seeing a merge commit. In detecting regressions introduced through the merging of a branch, the merge commit will be identified as introduction of the bug and its ancestors will be ignored. This option is particularly useful in avoiding false positives when a merged branch contained broken or non-buildable commits, but the merge itself was OK. EXAMPLES top Automatically bisect a broken build between v1.2 and HEAD: $ git bisect start HEAD v1.2 -- # HEAD is bad, v1.2 is good $ git bisect run make # "make" builds the app $ git bisect reset # quit the bisect session Automatically bisect a test failure between origin and HEAD: $ git bisect start HEAD origin -- # HEAD is bad, origin is good $ git bisect run make test # "make test" builds and tests $ git bisect reset # quit the bisect session Automatically bisect a broken test case: $ cat ~/test.sh #!/bin/sh make || exit 125 # this skips broken builds ~/check_test_case.sh # does the test case pass? $ git bisect start HEAD HEAD~10 -- # culprit is among the last 10 $ git bisect run ~/test.sh $ git bisect reset # quit the bisect session Here we use a test.sh custom script. In this script, if make fails, we skip the current commit. check_test_case.sh should exit 0 if the test case passes, and exit 1 otherwise. It is safer if both test.sh and check_test_case.sh are outside the repository to prevent interactions between the bisect, make and test processes and the scripts. Automatically bisect with temporary modifications (hot-fix): $ cat ~/test.sh #!/bin/sh # tweak the working tree by merging the hot-fix branch # and then attempt a build if git merge --no-commit --no-ff hot-fix && make then # run project specific test and report its status ~/check_test_case.sh status=$? else # tell the caller this is untestable status=125 fi # undo the tweak to allow clean flipping to the next commit git reset --hard # return control exit $status This applies modifications from a hot-fix branch before each test run, e.g. 
in case your build or test environment changed so that older revisions may need a fix which newer ones have already. (Make sure the hot-fix branch is based off a commit which is contained in all revisions which you are bisecting, so that the merge does not pull in too much, or use git cherry-pick instead of git merge.) Automatically bisect a broken test case: $ git bisect start HEAD HEAD~10 -- # culprit is among the last 10 $ git bisect run sh -c "make || exit 125; ~/check_test_case.sh" $ git bisect reset # quit the bisect session This shows that you can do without a run script if you write the test on a single line. Locate a good region of the object graph in a damaged repository $ git bisect start HEAD <known-good-commit> [ <boundary-commit> ... ] --no-checkout $ git bisect run sh -c ' GOOD=$(git for-each-ref "--format=%(objectname)" refs/bisect/good-*) && git rev-list --objects BISECT_HEAD --not $GOOD >tmp.$$ && git pack-objects --stdout >/dev/null <tmp.$$ rc=$? rm -f tmp.$$ test $rc = 0' $ git bisect reset # quit the bisect session In this case, when git bisect run finishes, bisect/bad will refer to a commit that has at least one parent whose reachable graph is fully traversable in the sense required by git pack objects. Look for a fix instead of a regression in the code $ git bisect start $ git bisect new HEAD # current commit is marked as new $ git bisect old HEAD~10 # the tenth commit from now is marked as old or: $ git bisect start --term-old broken --term-new fixed $ git bisect fixed $ git bisect broken HEAD~10 Getting help Use git bisect to get a short usage description, and git bisect help or git bisect -h to get a long usage description. SEE ALSO top Fighting regressions with git bisect[1], git-blame(1). GIT top Part of the git(1) suite NOTES top 1. Fighting regressions with git bisect file:///home/mtk/share/doc/git-doc/git-bisect-lk2009.html COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-BISECT(1) Pages that refer to this page: git(1), gittutorial(7), gitworkflows(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git bisect\n\n> Use binary search to find the commit that introduced a bug.\n> Git automatically jumps back and forth in the commit graph to progressively narrow down the faulty commit.\n> More information: <https://git-scm.com/docs/git-bisect>.\n\n- Start a bisect session on a commit range bounded by a known buggy commit, and a known clean (typically older) one:\n\n`git bisect start {{bad_commit}} {{good_commit}}`\n\n- For each commit that `git bisect` selects, mark it as "bad" or "good" after testing it for the issue:\n\n`git bisect {{good|bad}}`\n\n- After `git bisect` pinpoints the faulty commit, end the bisect session and return to the previous branch:\n\n`git bisect reset`\n\n- Skip a commit during a bisect (e.g. one that fails the tests due to a different issue):\n\n`git bisect skip`\n\n- Display a log of what has been done so far:\n\n`git bisect log`\n
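A compact automated session in the spirit of the EXAMPLES above; the tag `v1.2` and the script `./run_tests.sh` (exit 0 on success, non-zero on failure) are hypothetical:

$ git bisect start HEAD v1.2 --                             # HEAD is bad, v1.2 is known good
$ git bisect run sh -c 'make || exit 125; ./run_tests.sh'   # exit code 125 skips commits that do not build
$ git bisect reset                                          # finish and return to the original HEAD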
git-blame
git-blame(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-blame(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | THE DEFAULT FORMAT | THE PORCELAIN FORMAT | SPECIFYING RANGES | INCREMENTAL OUTPUT | MAPPING AUTHORS | CONFIGURATION | SEE ALSO | GIT | COLOPHON GIT-BLAME(1) Git Manual GIT-BLAME(1) NAME top git-blame - Show what revision and author last modified each line of a file SYNOPSIS top git blame [-c] [-b] [-l] [--root] [-t] [-f] [-n] [-s] [-e] [-p] [-w] [--incremental] [-L <range>] [-S <revs-file>] [-M] [-C] [-C] [-C] [--since=<date>] [--ignore-rev <rev>] [--ignore-revs-file <file>] [--color-lines] [--color-by-age] [--progress] [--abbrev=<n>] [ --contents <file> ] [<rev> | --reverse <rev>..<rev>] [--] <file> DESCRIPTION top Annotates each line in the given file with information from the revision which last modified the line. Optionally, start annotating from the given revision. When specified one or more times, -L restricts annotation to the requested lines. The origin of lines is automatically followed across whole-file renames (currently there is no option to turn the rename-following off). To follow lines moved from one file to another, or to follow lines that were copied and pasted from another file, etc., see the -C and -M options. The report does not tell you anything about lines which have been deleted or replaced; you need to use a tool such as git diff or the "pickaxe" interface briefly mentioned in the following paragraph. Apart from supporting file annotation, Git also supports searching the development history for when a code snippet occurred in a change. This makes it possible to track when a code snippet was added to a file, moved or copied between files, and eventually deleted or replaced. It works by searching for a text string in the diff. A small example of the pickaxe interface that searches for blame_usage: $ git log --pretty=oneline -S'blame_usage' 5040f17eba15504bad66b14a645bddd9b015ebb7 blame -S <ancestry-file> ea4c7f9bf69e781dd0cd88d2bccb2bf5cc15c9a7 git-blame: Make the output OPTIONS top -b Show blank SHA-1 for boundary commits. This can also be controlled via the blame.blankBoundary config option. --root Do not treat root commits as boundaries. This can also be controlled via the blame.showRoot config option. --show-stats Include additional statistics at the end of blame output. -L <start>,<end>, -L :<funcname> Annotate only the line range given by <start>,<end>, or by the function name regex <funcname>. May be specified multiple times. Overlapping ranges are allowed. <start> and <end> are optional. -L <start> or -L <start>, spans from <start> to end of file. -L ,<end> spans from start of file to <end>. <start> and <end> can take one of these forms: number If <start> or <end> is a number, it specifies an absolute line number (lines count from 1). /regex/ This form will use the first line matching the given POSIX regex. If <start> is a regex, it will search from the end of the previous -L range, if any, otherwise from the start of file. If <start> is ^/regex/, it will search from the start of file. If <end> is a regex, it will search starting at the line given by <start>. +offset or -offset This is only valid for <end> and will specify a number of lines before or after the line given by <start>. If :<funcname> is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. 
:<funcname> searches from the end of the previous -L range, if any, otherwise from the start of file. ^:<funcname> searches from the start of file. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). -l Show long rev (Default: off). -t Show raw timestamp (Default: off). -S <revs-file> Use revisions from revs-file instead of calling git-rev-list(1). --reverse <rev>..<rev> Walk history forward instead of backward. Instead of showing the revision in which a line appeared, this shows the last revision in which a line has existed. This requires a range of revision like START..END where the path to blame exists in START. git blame --reverse START is taken as git blame --reverse START..HEAD for convenience. --first-parent Follow only the first parent commit upon seeing a merge commit. This option can be used to determine when a line was introduced to a particular integration branch, rather than when it was introduced to the history overall. -p, --porcelain Show in a format designed for machine consumption. --line-porcelain Show the porcelain format, but output commit information for each line, not just the first time a commit is referenced. Implies --porcelain. --incremental Show the result incrementally in a format designed for machine consumption. --encoding=<encoding> Specifies the encoding used to output author names and commit summaries. Setting it to none makes blame output unconverted data. For more information see the discussion about encoding in the git-log(1) manual page. --contents <file> Annotate using the contents from the named file, starting from <rev> if it is specified, and HEAD otherwise. You may specify - to make the command read from the standard input for the file contents. --date <format> Specifies the format used to output dates. If --date is not provided, the value of the blame.date config variable is used. If the blame.date config variable is also not set, the iso format is used. For supported values, see the discussion of the --date option at git-log(1). --[no-]progress Progress status is reported on the standard error stream by default when it is attached to a terminal. This flag enables progress reporting even if not attached to a terminal. Cant use --progress together with --porcelain or --incremental. -M[<num>] Detect moved or copied lines within a file. When a commit moves or copies a block of lines (e.g. the original file has A and then B, and the commit changes it to B and then A), the traditional blame algorithm notices only half of the movement and typically blames the lines that were moved up (i.e. B) to the parent and assigns blame to the lines that were moved down (i.e. A) to the child commit. With this option, both groups of lines are blamed on the parent by running extra passes of inspection. <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying within a file for it to associate those lines with the parent commit. The default value is 20. -C[<num>] In addition to -M, detect lines moved or copied from other files that were modified in the same commit. This is useful when you reorganize your program and move code around across files. When this option is given twice, the command additionally looks for copies from other files in the commit that creates the file. When this option is given three times, the command additionally looks for copies from other files in any commit. 
<num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying between files for it to associate those lines with the parent commit. And the default value is 40. If there are more than one -C options given, the <num> argument of the last -C will take effect. --ignore-rev <rev> Ignore changes made by the revision when assigning blame, as if the change never happened. Lines that were changed or added by an ignored commit will be blamed on the previous commit that changed that line or nearby lines. This option may be specified multiple times to ignore more than one revision. If the blame.markIgnoredLines config option is set, then lines that were changed by an ignored commit and attributed to another commit will be marked with a ? in the blame output. If the blame.markUnblamableLines config option is set, then those lines touched by an ignored commit that we could not attribute to another revision are marked with a *. --ignore-revs-file <file> Ignore revisions listed in file, which must be in the same format as an fsck.skipList. This option may be repeated, and these files will be processed after any files specified with the blame.ignoreRevsFile config option. An empty file name, "", will clear the list of revs from previously processed files. --color-lines Color line annotations in the default format differently if they come from the same commit as the preceding line. This makes it easier to distinguish code blocks introduced by different commits. The color defaults to cyan and can be adjusted using the color.blame.repeatedLines config option. --color-by-age Color line annotations depending on the age of the line in the default format. The color.blame.highlightRecent config option controls what color is used for each range of age. -h Show help message. -c Use the same output mode as git-annotate(1) (Default: off). --score-debug Include debugging information related to the movement of lines between files (see -C) and lines moved within a file (see -M). The first number listed is the score. This is the number of alphanumeric characters detected as having been moved between or within files. This must be above a certain threshold for git blame to consider those lines of code to have been moved. -f, --show-name Show the filename in the original commit. By default the filename is shown if there is any line that came from a file with a different name, due to rename detection. -n, --show-number Show the line number in the original commit (Default: off). -s Suppress the author name and timestamp from the output. -e, --show-email Show the author email instead of the author name (Default: off). This can also be controlled via the blame.showEmail config option. -w Ignore whitespace when comparing the parents version and the childs to find where the lines came from. --abbrev=<n> Instead of using the default 7+1 hexadecimal digits as the abbreviated object name, use <m>+1 digits, where <m> is at least <n> but ensures the commit object names are unique. Note that 1 column is used for a caret to mark the boundary commit. THE DEFAULT FORMAT top When neither --porcelain nor --incremental option is specified, git blame will output annotation for each line with: abbreviated object name for the commit the line came from; author ident (by default the author name and date, unless -s or -e is specified); and line number before the line contents. 
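As a practical note on the --ignore-rev and blame.ignoreRevsFile options described above: a common workflow is to collect bulk reformatting commits in a file so that they never show up as the last change to a line. A sketch, assuming the conventional (but not mandated) file name .git-blame-ignore-revs and placeholder object and file names:

$ echo <reformat-commit-sha1> >> .git-blame-ignore-revs   # one unabbreviated object name per line
$ git config blame.ignoreRevsFile .git-blame-ignore-revs  # apply it to every blame invocation
$ git config blame.markIgnoredLines true                  # mark re-attributed lines with '?'
$ git blame -w -L 100,160 <file>                          # blame a line range, ignoring whitespace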
THE PORCELAIN FORMAT top In this format, each line is output after a header; the header at the minimum has the first line which has: 40-byte SHA-1 of the commit the line is attributed to; the line number of the line in the original file; the line number of the line in the final file; on a line that starts a group of lines from a different commit than the previous one, the number of lines in this group. On subsequent lines this field is absent. This header line is followed by the following information at least once for each commit: the author name ("author"), email ("author-mail"), time ("author-time"), and time zone ("author-tz"); similarly for committer. the filename in the commit that the line is attributed to. the first line of the commit log message ("summary"). The contents of the actual line are output after the above header, prefixed by a TAB. This is to allow adding more header elements later. The porcelain format generally suppresses commit information that has already been seen. For example, two lines that are blamed to the same commit will both be shown, but the details for that commit will be shown only once. This is more efficient, but may require more state be kept by the reader. The --line-porcelain option can be used to output full commit information for each line, allowing simpler (but less efficient) usage like: # count the number of lines attributed to each author git blame --line-porcelain file | sed -n 's/^author //p' | sort | uniq -c | sort -rn SPECIFYING RANGES top Unlike git blame and git annotate in older versions of git, the extent of the annotation can be limited to both line ranges and revision ranges. The -L option, which limits annotation to a range of lines, may be specified multiple times. When you are interested in finding the origin for lines 40-60 for file foo, you can use the -L option like so (they mean the same thing both ask for 21 lines starting at line 40): git blame -L 40,60 foo git blame -L 40,+21 foo Also you can use a regular expression to specify the line range: git blame -L '/^sub hello {/,/^}$/' foo which limits the annotation to the body of the hello subroutine. When you are not interested in changes older than version v2.6.18, or changes older than 3 weeks, you can use revision range specifiers similar to git rev-list: git blame v2.6.18.. -- foo git blame --since=3.weeks -- foo When revision range specifiers are used to limit the annotation, lines that have not changed since the range boundary (either the commit v2.6.18 or the most recent commit that is more than 3 weeks old in the above example) are blamed for that range boundary commit. A particularly useful way is to see if an added file has lines created by copy-and-paste from existing files. Sometimes this indicates that the developer was being sloppy and did not refactor the code properly. You can first find the commit that introduced the file with: git log --diff-filter=A --pretty=short -- foo and then annotate the change between the commit and its parents, using commit^! notation: git blame -C -C -f $commit^! -- foo INCREMENTAL OUTPUT top When called with --incremental option, the command outputs the result as it is built. The output generally will talk about lines touched by more recent commits first (i.e. the lines will be annotated out of order) and is meant to be used by interactive viewers. The output format is similar to the Porcelain format, but it does not contain the actual lines from the file that is being annotated. 1. 
Each blame entry always starts with a line of: <40-byte hex sha1> <sourceline> <resultline> <num_lines> Line numbers count from 1. 2. The first time that a commit shows up in the stream, it has various other information about it printed out with a one-word tag at the beginning of each line describing the extra commit information (author, email, committer, dates, summary, etc.). 3. Unlike the Porcelain format, the filename information is always given and terminates the entry: "filename" <whitespace-quoted-filename-goes-here> and thus it is really quite easy to parse for some line- and word-oriented parser (which should be quite natural for most scripting languages). Note For people who do parsing: to make it more robust, just ignore any lines between the first and last one ("<sha1>" and "filename" lines) where you do not recognize the tag words (or care about that particular one) at the beginning of the "extended information" lines. That way, if there is ever added information (like the commit encoding or extended commit commentary), a blame viewer will not care. MAPPING AUTHORS top See gitmailmap(5). CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as whats found there: blame.blankBoundary Show blank commit object name for boundary commits in git-blame(1). This option defaults to false. blame.coloring This determines the coloring scheme to be applied to blame output. It can be repeatedLines, highlightRecent, or none which is the default. blame.date Specifies the format used to output dates in git-blame(1). If unset the iso format is used. For supported values, see the discussion of the --date option at git-log(1). blame.showEmail Show the author email instead of author name in git-blame(1). This option defaults to false. blame.showRoot Do not treat root commits as boundaries in git-blame(1). This option defaults to false. blame.ignoreRevsFile Ignore revisions listed in the file, one unabbreviated object name per line, in git-blame(1). Whitespace and comments beginning with # are ignored. This option may be repeated multiple times. Empty file names will reset the list of ignored revisions. This option will be handled before the command line option --ignore-revs-file. blame.markUnblamableLines Mark lines that were changed by an ignored revision that we could not attribute to another commit with a * in the output of git-blame(1). blame.markIgnoredLines Mark lines that were changed by an ignored revision that we attributed to another commit with a ? in the output of git-blame(1). SEE ALSO top git-annotate(1) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) 
# git blame

> Show commit hash and last author on each line of a file.
> More information: <https://git-scm.com/docs/git-blame>.

- Print file with author name and commit hash on each line:

`git blame {{path/to/file}}`

- Print file with author email and commit hash on each line:

`git blame -e {{path/to/file}}`

- Print file with author name and commit hash on each line at a specific commit:

`git blame {{commit}} {{path/to/file}}`

- Print file with author name and commit hash on each line before a specific commit:

`git blame {{commit}}~ {{path/to/file}}`
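A hedged sketch combining several of the options documented above; the path, line numbers, and function name are placeholders:

$ git blame -e -w -L 40,60 <path/to/file>   # lines 40-60 only, author emails, ignore whitespace
$ git blame -L :<funcname> <path/to/file>   # annotate a single function, located by its name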
git-branch
git-branch(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-branch(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFIGURATION | EXAMPLES | NOTES | SEE ALSO | GIT | NOTES | COLOPHON GIT-BRANCH(1) Git Manual GIT-BRANCH(1) NAME top git-branch - List, create, or delete branches SYNOPSIS top git branch [--color[=<when>] | --no-color] [--show-current] [-v [--abbrev=<n> | --no-abbrev]] [--column[=<options>] | --no-column] [--sort=<key>] [--merged [<commit>]] [--no-merged [<commit>]] [--contains [<commit>]] [--no-contains [<commit>]] [--points-at <object>] [--format=<format>] [(-r | --remotes) | (-a | --all)] [--list] [<pattern>...] git branch [--track[=(direct|inherit)] | --no-track] [-f] [--recurse-submodules] <branchname> [<start-point>] git branch (--set-upstream-to=<upstream> | -u <upstream>) [<branchname>] git branch --unset-upstream [<branchname>] git branch (-m | -M) [<oldbranch>] <newbranch> git branch (-c | -C) [<oldbranch>] <newbranch> git branch (-d | -D) [-r] <branchname>... git branch --edit-description [<branchname>] DESCRIPTION top If --list is given, or if there are no non-option arguments, existing branches are listed; the current branch will be highlighted in green and marked with an asterisk. Any branches checked out in linked worktrees will be highlighted in cyan and marked with a plus sign. Option -r causes the remote-tracking branches to be listed, and option -a shows both local and remote branches. If a <pattern> is given, it is used as a shell wildcard to restrict the output to matching branches. If multiple patterns are given, a branch is shown if it matches any of the patterns. Note that when providing a <pattern>, you must use --list; otherwise the command may be interpreted as branch creation. With --contains, shows only the branches that contain the named commit (in other words, the branches whose tip commits are descendants of the named commit), --no-contains inverts it. With --merged, only branches merged into the named commit (i.e. the branches whose tip commits are reachable from the named commit) will be listed. With --no-merged only branches not merged into the named commit will be listed. If the <commit> argument is missing it defaults to HEAD (i.e. the tip of the current branch). The commands second form creates a new branch head named <branchname> which points to the current HEAD, or <start-point> if given. As a special case, for <start-point>, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. Note that this will create the new branch, but it will not switch the working tree to it; use "git switch <newbranch>" to switch to the new branch. When a local branch is started off a remote-tracking branch, Git sets up the branch (specifically the branch.<name>.remote and branch.<name>.merge configuration entries) so that git pull will appropriately merge from the remote-tracking branch. This behavior may be changed via the global branch.autoSetupMerge configuration flag. That setting can be overridden by using the --track and --no-track options, and changed later using git branch --set-upstream-to. With a -m or -M option, <oldbranch> will be renamed to <newbranch>. If <oldbranch> had a corresponding reflog, it is renamed to match <newbranch>, and a reflog entry is created to remember the branch renaming. If <newbranch> exists, -M must be used to force the rename to happen. 
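A short sketch of the creation and rename forms just described, using hypothetical branch names and the v2.6.14 tag that also appears in the examples further down this page:

$ git branch topic v2.6.14            # create "topic" at tag v2.6.14; the working tree is not switched
$ git branch -m topic feature/topic   # rename it; the reflog and configuration follow the new name
$ git switch feature/topic            # switching is a separate step, done with git switch (or checkout)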
The -c and -C options have the exact same semantics as -m and -M, except instead of the branch being renamed, it will be copied to a new name, along with its config and reflog. With a -d or -D option, <branchname> will be deleted. You may specify more than one branch for deletion. If the branch currently has a reflog then the reflog will also be deleted. Use -r together with -d to delete remote-tracking branches. Note, that it only makes sense to delete remote-tracking branches if they no longer exist in the remote repository or if git fetch was configured not to fetch them again. See also the prune subcommand of git-remote(1) for a way to clean up all obsolete remote-tracking branches. OPTIONS top -d, --delete Delete a branch. The branch must be fully merged in its upstream branch, or in HEAD if no upstream was set with --track or --set-upstream-to. -D Shortcut for --delete --force. --create-reflog Create the branchs reflog. This activates recording of all changes made to the branch ref, enabling use of date based sha1 expressions such as "<branchname>@{yesterday}". Note that in non-bare repositories, reflogs are usually enabled by default by the core.logAllRefUpdates config option. The negated form --no-create-reflog only overrides an earlier --create-reflog, but currently does not negate the setting of core.logAllRefUpdates. -f, --force Reset <branchname> to <start-point>, even if <branchname> exists already. Without -f, git branch refuses to change an existing branch. In combination with -d (or --delete), allow deleting the branch irrespective of its merged status, or whether it even points to a valid commit. In combination with -m (or --move), allow renaming the branch even if the new branch name already exists, the same applies for -c (or --copy). Note that git branch -f <branchname> [<start-point>], even with -f, refuses to change an existing branch <branchname> that is checked out in another worktree linked to the same repository. -m, --move Move/rename a branch, together with its config and reflog. -M Shortcut for --move --force. -c, --copy Copy a branch, together with its config and reflog. -C Shortcut for --copy --force. --color[=<when>] Color branches to highlight current, local, and remote-tracking branches. The value must be always (the default), never, or auto. --no-color Turn off branch colors, even when the configuration file gives the default to color output. Same as --color=never. -i, --ignore-case Sorting and filtering branches are case insensitive. --omit-empty Do not print a newline after formatted refs where the format expands to the empty string. --column[=<options>], --no-column Display branch listing in columns. See configuration variable column.branch for option syntax. --column and --no-column without options are equivalent to always and never respectively. This option is only applicable in non-verbose mode. -r, --remotes List or delete (if used with -d) the remote-tracking branches. Combine with --list to match the optional pattern(s). -a, --all List both remote-tracking branches and local branches. Combine with --list to match optional pattern(s). -l, --list List branches. With optional <pattern>..., e.g. git branch --list 'maint-*', list only the branches that match the pattern(s). --show-current Print the name of the current branch. In detached HEAD state, nothing is printed. -v, -vv, --verbose When in list mode, show sha1 and commit subject line for each head, along with relationship to upstream branch (if any). 
If given twice, print the path of the linked worktree (if any) and the name of the upstream branch, as well (see also git remote show <remote>). Note that the current worktrees HEAD will not have its path printed (it will always be your current directory). -q, --quiet Be more quiet when creating or deleting a branch, suppressing non-error messages. --abbrev=<n> In the verbose listing that show the commit object name, show the shortest prefix that is at least <n> hexdigits long that uniquely refers the object. The default value is 7 and can be overridden by the core.abbrev config option. --no-abbrev Display the full sha1s in the output listing rather than abbreviating them. -t, --track[=(direct|inherit)] When creating a new branch, set up branch.<name>.remote and branch.<name>.merge configuration entries to set "upstream" tracking configuration for the new branch. This configuration will tell git to show the relationship between the two branches in git status and git branch -v. Furthermore, it directs git pull without arguments to pull from the upstream when the new branch is checked out. The exact upstream branch is chosen depending on the optional argument: -t, --track, or --track=direct means to use the start-point branch itself as the upstream; --track=inherit means to copy the upstream configuration of the start-point branch. The branch.autoSetupMerge configuration variable specifies how git switch, git checkout and git branch should behave when neither --track nor --no-track are specified: The default option, true, behaves as though --track=direct were given whenever the start-point is a remote-tracking branch. false behaves as if --no-track were given. always behaves as though --track=direct were given. inherit behaves as though --track=inherit were given. simple behaves as though --track=direct were given only when the start-point is a remote-tracking branch and the new branch has the same name as the remote branch. See git-pull(1) and git-config(1) for additional discussion on how the branch.<name>.remote and branch.<name>.merge options are used. --no-track Do not set up "upstream" configuration, even if the branch.autoSetupMerge configuration variable is set. --recurse-submodules THIS OPTION IS EXPERIMENTAL! Causes the current command to recurse into submodules if submodule.propagateBranches is enabled. See submodule.propagateBranches in git-config(1). Currently, only branch creation is supported. When used in branch creation, a new branch <branchname> will be created in the superproject and all of the submodules in the superprojects <start-point>. In submodules, the branch will point to the submodule commit in the superprojects <start-point> but the branchs tracking information will be set up based on the submodules branches and remotes e.g. git branch --recurse-submodules topic origin/main will create the submodule branch "topic" that points to the submodule commit in the superprojects "origin/main", but tracks the submodules "origin/main". --set-upstream As this option had confusing syntax, it is no longer supported. Please use --track or --set-upstream-to instead. -u <upstream>, --set-upstream-to=<upstream> Set up <branchname>'s tracking information so <upstream> is considered <branchname>'s upstream branch. If no <branchname> is specified, then it defaults to the current branch. --unset-upstream Remove the upstream information for <branchname>. If no branch is specified it defaults to the current branch. 
--edit-description Open an editor and edit the text to explain what the branch is for, to be used by various other commands (e.g. format-patch, request-pull, and merge (if enabled)). Multi-line explanations may be used. --contains [<commit>] Only list branches which contain the specified commit (HEAD if not specified). Implies --list. --no-contains [<commit>] Only list branches which dont contain the specified commit (HEAD if not specified). Implies --list. --merged [<commit>] Only list branches whose tips are reachable from the specified commit (HEAD if not specified). Implies --list. --no-merged [<commit>] Only list branches whose tips are not reachable from the specified commit (HEAD if not specified). Implies --list. <branchname> The name of the branch to create or delete. The new branch name must pass all checks defined by git-check-ref-format(1). Some of these checks may restrict the characters allowed in a branch name. <start-point> The new branch head will point to this commit. It may be given as a branch name, a commit-id, or a tag. If this option is omitted, the current HEAD will be used instead. <oldbranch> The name of an existing branch to rename. <newbranch> The new name for an existing branch. The same restrictions as for <branchname> apply. --sort=<key> Sort based on the key given. Prefix - to sort in descending order of the value. You may use the --sort=<key> option multiple times, in which case the last key becomes the primary key. The keys supported are the same as those in git for-each-ref. Sort order defaults to the value configured for the branch.sort variable if it exists, or to sorting based on the full refname (including refs/... prefix). This lists detached HEAD (if present) first, then local branches and finally remote-tracking branches. See git-config(1). --points-at <object> Only list branches of the given object. --format <format> A string that interpolates %(fieldname) from a branch ref being shown and the object it points at. The format is the same as that of git-for-each-ref(1). CONFIGURATION top pager.branch is only respected when listing branches, i.e., when --list is used or implied. The default is to use a pager. See git-config(1). Everything above this line in this section isnt included from the git-config(1) documentation. The content that follows is the same as whats found there: branch.autoSetupMerge Tells git branch, git switch and git checkout to set up new branches so that git-pull(1) will appropriately merge from the starting point branch. Note that even if this option is not set, this behavior can be chosen per-branch using the --track and --no-track options. The valid settings are: false no automatic setup is done; true automatic setup is done when the starting point is a remote-tracking branch; always automatic setup is done when the starting point is either a local branch or remote-tracking branch; inherit if the starting point has a tracking configuration, it is copied to the new branch; simple automatic setup is done only when the starting point is a remote-tracking branch and the new branch has the same name as the remote branch. This option defaults to true. branch.autoSetupRebase When a new branch is created with git branch, git switch or git checkout that tracks another branch, this variable tells Git to set up pull to rebase instead of merge (see "branch.<name>.rebase"). When never, rebase is never automatically set to true. When local, rebase is set to true for tracked branches of other local branches. 
When remote, rebase is set to true for tracked branches of remote-tracking branches. When always, rebase will be set to true for all tracking branches. See "branch.autoSetupMerge" for details on how to set up a branch to track another branch. This option defaults to never. branch.sort This variable controls the sort ordering of branches when displayed by git-branch(1). Without the "--sort=<value>" option provided, the value of this variable will be used as the default. See git-for-each-ref(1) field names for valid values. branch.<name>.remote When on branch <name>, it tells git fetch and git push which remote to fetch from or push to. The remote to push to may be overridden with remote.pushDefault (for all branches). The remote to push to, for the current branch, may be further overridden by branch.<name>.pushRemote. If no remote is configured, or if you are not on any branch and there is more than one remote defined in the repository, it defaults to origin for fetching and remote.pushDefault for pushing. Additionally, . (a period) is the current local repository (a dot-repository), see branch.<name>.merge's final note below. branch.<name>.pushRemote When on branch <name>, it overrides branch.<name>.remote for pushing. It also overrides remote.pushDefault for pushing from branch <name>. When you pull from one place (e.g. your upstream) and push to another place (e.g. your own publishing repository), you would want to set remote.pushDefault to specify the remote to push to for all branches, and use this option to override it for a specific branch. branch.<name>.merge Defines, together with branch.<name>.remote, the upstream branch for the given branch. It tells git fetch/git pull/git rebase which branch to merge and can also affect git push (see push.default). When in branch <name>, it tells git fetch the default refspec to be marked for merging in FETCH_HEAD. The value is handled like the remote part of a refspec, and must match a ref which is fetched from the remote given by "branch.<name>.remote". The merge information is used by git pull (which first calls git fetch) to lookup the default branch for merging. Without this option, git pull defaults to merge the first refspec fetched. Specify multiple values to get an octopus merge. If you wish to setup git pull so that it merges into <name> from another branch in the local repository, you can point branch.<name>.merge to the desired branch, and use the relative path setting . (a period) for branch.<name>.remote. branch.<name>.mergeOptions Sets default options for merging into branch <name>. The syntax and supported options are the same as those of git-merge(1), but option values containing whitespace characters are currently not supported. branch.<name>.rebase When true, rebase the branch <name> on top of the fetched branch, instead of merging the default branch from the default remote when "git pull" is run. See "pull.rebase" for doing this in a non branch-specific manner. When merges (or just m), pass the --rebase-merges option to git rebase so that the local merge commits are included in the rebase (see git-rebase(1) for details). When the value is interactive (or just i), the rebase is run in interactive mode. NOTE: this is a possibly dangerous operation; do not use it unless you understand the implications (see git-rebase(1) for details). branch.<name>.description Branch description, can be edited with git branch --edit-description. Branch description is automatically added to the format-patch cover letter or request-pull summary. 
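A minimal sketch of setting the configuration described above from the command line; the chosen values are illustrative, not defaults:

$ git config branch.sort -committerdate    # list the most recently committed branches first
$ git config branch.autoSetupMerge simple  # auto-track only same-named remote-tracking start points
$ git config branch.autoSetupRebase remote # new tracking branches pull with rebase instead of merge
$ git branch -vv                           # verify: shows each branch's upstream and ahead/behind state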
EXAMPLES top Start development from a known tag $ git clone git://git.kernel.org/pub/scm/.../linux-2.6 my2.6 $ cd my2.6 $ git branch my2.6.14 v2.6.14 (1) $ git switch my2.6.14 1. This step and the next one could be combined into a single step with "checkout -b my2.6.14 v2.6.14". Delete an unneeded branch $ git clone git://git.kernel.org/.../git.git my.git $ cd my.git $ git branch -d -r origin/todo origin/html origin/man (1) $ git branch -D test (2) 1. Delete the remote-tracking branches "todo", "html" and "man". The next fetch or pull will create them again unless you configure them not to. See git-fetch(1). 2. Delete the "test" branch even if the "master" branch (or whichever branch is currently checked out) does not have all commits from the test branch. Listing branches from a specific remote $ git branch -r -l '<remote>/<pattern>' (1) $ git for-each-ref 'refs/remotes/<remote>/<pattern>' (2) 1. Using -a would conflate <remote> with any local branches you happen to have been prefixed with the same <remote> pattern. 2. for-each-ref can take a wide range of options. See git-for-each-ref(1) Patterns will normally need quoting. NOTES top If you are creating a branch that you want to switch to immediately, it is easier to use the "git switch" command with its -c option to do the same thing with a single command. The options --contains, --no-contains, --merged and --no-merged serve four related but different purposes: --contains <commit> is used to find all branches which will need special attention if <commit> were to be rebased or amended, since those branches contain the specified <commit>. --no-contains <commit> is the inverse of that, i.e. branches that dont contain the specified <commit>. --merged is used to find all branches which can be safely deleted, since those branches are fully contained by HEAD. --no-merged is used to find branches which are candidates for merging into HEAD, since those branches are not fully contained by HEAD. When combining multiple --contains and --no-contains filters, only references that contain at least one of the --contains commits and contain none of the --no-contains commits are shown. When combining multiple --merged and --no-merged filters, only references that are reachable from at least one of the --merged commits and from none of the --no-merged commits are shown. SEE ALSO top git-check-ref-format(1), git-fetch(1), git-remote(1), Understanding history: What is a branch?[1] in the Git Users Manual. GIT top Part of the git(1) suite NOTES top 1. Understanding history: What is a branch? file:///home/mtk/share/doc/git-doc/user-manual.html#what-is-a-branch COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) 
# git branch

> Main Git command for working with branches.
> More information: <https://git-scm.com/docs/git-branch>.

- List all branches (local and remote; the current branch is highlighted by `*`):

`git branch --all`

- List which branches include a specific Git commit in their history:

`git branch --all --contains {{commit_hash}}`

- Show the name of the current branch:

`git branch --show-current`

- Create new branch based on the current commit:

`git branch {{branch_name}}`

- Create new branch based on a specific commit:

`git branch {{branch_name}} {{commit_hash}}`

- Rename a branch (works even for the currently checked-out branch):

`git branch -m {{old_branch_name}} {{new_branch_name}}`

- Delete a local branch (must not have it checked out to do this):

`git branch -d {{branch_name}}`

- Delete a remote branch:

`git push {{remote_name}} --delete {{remote_branch_name}}`
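Upstream tracking is not covered by the examples above. A hedged sketch, assuming a remote named origin with a main branch:

$ git branch -u origin/main    # make origin/main the upstream of the current branch
$ git branch -vv               # list branches with their upstreams and ahead/behind counts
$ git branch --merged main     # branches already merged into main, i.e. candidates for deletion
$ git branch --unset-upstream  # remove the tracking information again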
git-bugreport
git-bugreport(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-bugreport(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | GIT | COLOPHON GIT-BUGREPORT(1) Git Manual GIT-BUGREPORT(1) NAME top git-bugreport - Collect information for user to file a bug report SYNOPSIS top git bugreport [(-o | --output-directory) <path>] [(-s | --suffix) <format>] [--diagnose[=<mode>]] DESCRIPTION top Collects information about the users machine, Git client, and repository state, in addition to a form requesting information about the behavior the user observed, and stores it in a single text file which the user can then share, for example to the Git mailing list, in order to report an observed bug. The following information is requested from the user: Reproduction steps Expected behavior Actual behavior The following information is captured automatically: git version --build-options uname sysname, release, version, and machine strings Compiler-specific info string A list of enabled hooks $SHELL Additional information may be gathered into a separate zip archive using the --diagnose option, and can be attached alongside the bugreport document to provide additional context to readers. This tool is invoked via the typical Git setup process, which means that in some cases, it might not be able to launch - for example, if a relevant config file is unreadable. In this kind of scenario, it may be helpful to manually gather the kind of information listed above when manually asking for help. OPTIONS top -o <path>, --output-directory <path> Place the resulting bug report file in <path> instead of the current directory. -s <format>, --suffix <format> Specify an alternate suffix for the bugreport name, to create a file named git-bugreport-<formatted suffix>. This should take the form of a strftime(3) format string; the current local time will be used. --no-diagnose, --diagnose[=<mode>] Create a zip archive of supplemental information about the users machine, Git client, and repository state. The archive is written to the same output directory as the bug report and is named git-diagnostics-<formatted suffix>. Without mode specified, the diagnostic archive will contain the default set of statistics reported by git diagnose. An optional mode value may be specified to change which information is included in the archive. See git-diagnose(1) for the list of valid values for mode and details about their usage. GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-BUGREPORT(1) Pages that refer to this page: git(1), git-diagnose(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. 
# git bugreport

> Captures debug information from the system and user, generating a text file to aid in the reporting of a bug in Git.
> More information: <https://git-scm.com/docs/git-bugreport>.

- Create a new bug report file in the current directory:

`git bugreport`

- Create a new bug report file in the specified directory, creating it if it does not exist:

`git bugreport --output-directory {{path/to/directory}}`

- Create a new bug report file with the specified filename suffix in `strftime` format:

`git bugreport --suffix {{%m%d%y}}`
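The options can be combined; a sketch with a placeholder output directory (the suffix is a strftime format string):

$ git bugreport -o <path/to/reports> -s %Y-%m-%d --diagnose   # writes a bug report and a diagnostics
                                                              # zip archive, both named with the date
                                                              # suffix, into that directory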
git-bundle
git-bundle(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-bundle(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | BUNDLE FORMAT | OPTIONS | SPECIFYING REFERENCES | OBJECT PREREQUISITES | EXAMPLES | FILE FORMAT | GIT | COLOPHON GIT-BUNDLE(1) Git Manual GIT-BUNDLE(1) NAME top git-bundle - Move objects and refs by archive SYNOPSIS top git bundle create [-q | --quiet | --progress] [--version=<version>] <file> <git-rev-list-args> git bundle verify [-q | --quiet] <file> git bundle list-heads <file> [<refname>...] git bundle unbundle [--progress] <file> [<refname>...] DESCRIPTION top Create, unpack, and manipulate "bundle" files. Bundles are used for the "offline" transfer of Git objects without an active "server" sitting on the other side of the network connection. They can be used to create both incremental and full backups of a repository, and to relay the state of the references in one repository to another. Git commands that fetch or otherwise "read" via protocols such as ssh:// and https:// can also operate on bundle files. It is possible git-clone(1) a new repository from a bundle, to use git-fetch(1) to fetch from one, and to list the references contained within it with git-ls-remote(1). Theres no corresponding "write" support, i.e.a git push into a bundle is not supported. See the "EXAMPLES" section below for examples of how to use bundles. BUNDLE FORMAT top Bundles are .pack files (see git-pack-objects(1)) with a header indicating what references are contained within the bundle. Like the packed archive format itself bundles can either be self-contained, or be created using exclusions. See the "OBJECT PREREQUISITES" section below. Bundles created using revision exclusions are "thin packs" created using the --thin option to git-pack-objects(1), and unbundled using the --fix-thin option to git-index-pack(1). There is no option to create a "thick pack" when using revision exclusions, and users should not be concerned about the difference. By using "thin packs", bundles created using exclusions are smaller in size. That theyre "thin" under the hood is merely noted here as a curiosity, and as a reference to other documentation. See gitformat-bundle(5) for more details and the discussion of "thin pack" in gitformat-pack(5) for further details. OPTIONS top create [options] <file> <git-rev-list-args> Used to create a bundle named file. This requires the <git-rev-list-args> arguments to define the bundle contents. options contains the options specific to the git bundle create subcommand. If file is -, the bundle is written to stdout. verify <file> Used to check that a bundle file is valid and will apply cleanly to the current repository. This includes checks on the bundle format itself as well as checking that the prerequisite commits exist and are fully linked in the current repository. Then, git bundle prints a list of missing commits, if any. Finally, information about additional capabilities, such as "object filter", is printed. See "Capabilities" in gitformat-bundle(5) for more information. The exit code is zero for success, but will be nonzero if the bundle file is invalid. If file is -, the bundle is read from stdin. list-heads <file> Lists the references defined in the bundle. If followed by a list of references, only references matching those given are printed out. If file is -, the bundle is read from stdin. 
unbundle <file> Passes the objects in the bundle to git index-pack for storage in the repository, then prints the names of all defined references. If a list of references is given, only references matching those in the list are printed. This command is really plumbing, intended to be called only by git fetch. If file is -, the bundle is read from stdin. <git-rev-list-args> A list of arguments, acceptable to git rev-parse and git rev-list (and containing a named ref, see SPECIFYING REFERENCES below), that specifies the specific objects and references to transport. For example, master~10..master causes the current master reference to be packaged along with all objects added since its 10th ancestor commit. There is no explicit limit to the number of references and objects that may be packaged. [<refname>...] A list of references used to limit the references reported as available. This is principally of use to git fetch, which expects to receive only those references asked for and not necessarily everything in the pack (in this case, git bundle acts like git fetch-pack). --progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. --version=<version> Specify the bundle version. Version 2 is the older format and can only be used with SHA-1 repositories; the newer version 3 contains capabilities that permit extensions. The default is the oldest supported format, based on the hash algorithm in use. -q, --quiet This flag makes the command not to report its progress on the standard error stream. SPECIFYING REFERENCES top Revisions must be accompanied by reference names to be packaged in a bundle. More than one reference may be packaged, and more than one set of prerequisite objects can be specified. The objects packaged are those not contained in the union of the prerequisites. The git bundle create command resolves the reference names for you using the same rules as git rev-parse --abbrev-ref=loose. Each prerequisite can be specified explicitly (e.g. ^master~10), or implicitly (e.g. master~10..master, --since=10.days.ago master). All of these simple cases are OK (assuming we have a "master" and "next" branch): $ git bundle create master.bundle master $ echo master | git bundle create master.bundle --stdin $ git bundle create master-and-next.bundle master next $ (echo master; echo next) | git bundle create master-and-next.bundle --stdin And so are these (and the same but omitted --stdin examples): $ git bundle create recent-master.bundle master~10..master $ git bundle create recent-updates.bundle master~10..master next~5..next A revision name or a range whose right-hand-side cannot be resolved to a reference is not accepted: $ git bundle create HEAD.bundle $(git rev-parse HEAD) fatal: Refusing to create empty bundle. $ git bundle create master-yesterday.bundle master~10..master~5 fatal: Refusing to create empty bundle. OBJECT PREREQUISITES top When creating bundles it is possible to create a self-contained bundle that can be unbundled in a repository with no common history, as well as providing negative revisions to exclude objects needed in the earlier parts of the history. Feeding a revision such as new to git bundle create will create a bundle file that contains all the objects reachable from the revision new. 
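In other words, at least one real ref name has to be included; a bare object name is rejected even when it resolves. A small sketch of accepted forms, with hypothetical file and branch names (the - form streams the bundle through a pipe, as noted under OPTIONS):

$ git bundle create all.bundle --all                    # self-contained bundle of every ref
$ git bundle create recent.bundle master~10..master     # thin bundle; the receiver must have master~10
$ git bundle create - master | git bundle list-heads -  # write to stdout, read the header back from stdin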
That bundle can be unbundled in any repository to obtain a full history that leads to the revision new: $ git bundle create full.bundle new A revision range such as old..new will produce a bundle file that will require the revision old (and any objects reachable from it) to exist for the bundle to be "unbundle"-able: $ git bundle create full.bundle old..new A self-contained bundle without any prerequisites can be extracted into anywhere, even into an empty repository, or be cloned from (i.e., new, but not old..new). It is okay to err on the side of caution, causing the bundle file to contain objects already in the destination, as these are ignored when unpacking at the destination. If you want to match git clone --mirror, which would include your refs such as refs/remotes/*, use --all. If you want to provide the same set of refs that a clone directly from the source repository would get, use --branches --tags for the <git-rev-list-args>. The git bundle verify command can be used to check whether your recipient repository has the required prerequisite commits for a bundle. EXAMPLES top Assume you want to transfer the history from a repository R1 on machine A to another repository R2 on machine B. For whatever reason, direct connection between A and B is not allowed, but we can move data from A to B via some mechanism (CD, email, etc.). We want to update R2 with development made on the branch master in R1. To bootstrap the process, you can first create a bundle that does not have any prerequisites. You can use a tag to remember up to what commit you last processed, in order to make it easy to later update the other repository with an incremental bundle: machineA$ cd R1 machineA$ git bundle create file.bundle master machineA$ git tag -f lastR2bundle master Then you transfer file.bundle to the target machine B. Because this bundle does not require any existing object to be extracted, you can create a new repository on machine B by cloning from it: machineB$ git clone -b master /home/me/tmp/file.bundle R2 This will define a remote called "origin" in the resulting repository that lets you fetch and pull from the bundle. The $GIT_DIR/config file in R2 will have an entry like this: [remote "origin"] url = /home/me/tmp/file.bundle fetch = refs/heads/*:refs/remotes/origin/* To update the resulting mine.git repository, you can fetch or pull after replacing the bundle stored at /home/me/tmp/file.bundle with incremental updates. After working some more in the original repository, you can create an incremental bundle to update the other repository: machineA$ cd R1 machineA$ git bundle create file.bundle lastR2bundle..master machineA$ git tag -f lastR2bundle master You then transfer the bundle to the other machine to replace /home/me/tmp/file.bundle, and pull from it. machineB$ cd R2 machineB$ git pull If you know up to what commit the intended recipient repository should have the necessary objects, you can use that knowledge to specify the prerequisites, giving a cut-off point to limit the revisions and objects that go in the resulting bundle. The previous example used the lastR2bundle tag for this purpose, but you can use any other options that you would give to the git-log(1) command. 
Here are more examples: You can use a tag that is present in both: $ git bundle create mybundle v1.0.0..master You can use a prerequisite based on time: $ git bundle create mybundle --since=10.days master You can use the number of commits: $ git bundle create mybundle -10 master You can run git-bundle verify to see if you can extract from a bundle that was created with a prerequisite: $ git bundle verify mybundle This will list what commits you must have in order to extract from the bundle and will error out if you do not have them. A bundle from a recipient repositorys point of view is just like a regular repository which it fetches or pulls from. You can, for example, map references when fetching: $ git fetch mybundle master:localRef You can also see what references it offers: $ git ls-remote mybundle FILE FORMAT top See gitformat-bundle(5). GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-BUNDLE(1) Pages that refer to this page: dpkg-source(1), git(1), git-clone(1), git-fast-export(1), git-fetch(1), git-pack-objects(1), git-pull(1), git-push(1), gitformat-bundle(5), gitprotocol-v2(5) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git bundle

> Package objects and references into an archive.
> More information: <https://git-scm.com/docs/git-bundle>.

- Create a bundle file that contains all objects and references of a specific branch:

`git bundle create {{path/to/file.bundle}} {{branch_name}}`

- Create a bundle file of all branches:

`git bundle create {{path/to/file.bundle}} --all`

- Create a bundle file of the last 5 commits of the current branch:

`git bundle create {{path/to/file.bundle}} -{{5}} {{HEAD}}`

- Create a bundle file containing commits from the last 7 days:

`git bundle create {{path/to/file.bundle}} --since={{7.days}} {{HEAD}}`

- Verify that a bundle file is valid and can be applied to the current repository:

`git bundle verify {{path/to/file.bundle}}`

- Print to `stdout` the list of references contained in a bundle:

`git bundle list-heads {{path/to/file.bundle}}`

- Unbundle a specific branch from a bundle file into the current repository:

`git pull {{path/to/file.bundle}} {{branch_name}}`
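A round trip under the same assumptions, with hypothetical paths: create a self-contained bundle on one machine, then verify it and clone from it on another:

$ git bundle create repo.bundle --all            # everything needed to reproduce the repository
$ git bundle verify repo.bundle                  # sanity-check the bundle (no prerequisites here)
$ git clone -b <branch_name> repo.bundle <path/to/new/repo>   # clone directly from the bundle file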
git-cat-file
git-cat-file(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-cat-file(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OUTPUT | BATCH OUTPUT | CAVEATS | GIT | COLOPHON GIT-CAT-FILE(1) Git Manual GIT-CAT-FILE(1) NAME top git-cat-file - Provide contents or details of repository objects SYNOPSIS top git cat-file <type> <object> git cat-file (-e | -p) <object> git cat-file (-t | -s) [--allow-unknown-type] <object> git cat-file (--textconv | --filters) [<rev>:<path|tree-ish> | --path=<path|tree-ish> <rev>] git cat-file (--batch | --batch-check | --batch-command) [--batch-all-objects] [--buffer] [--follow-symlinks] [--unordered] [--textconv | --filters] [-Z] DESCRIPTION top Output the contents or other properties such as size, type or delta information of one or more objects. This command can operate in two modes, depending on whether an option from the --batch family is specified. In non-batch mode, the command provides information on an object named on the command line. In batch mode, arguments are read from standard input. OPTIONS top <object> The name of the object to show. For a more complete list of ways to spell object names, see the "SPECIFYING REVISIONS" section in gitrevisions(7). -t Instead of the content, show the object type identified by <object>. -s Instead of the content, show the object size identified by <object>. If used with --use-mailmap option, will show the size of updated object after replacing idents using the mailmap mechanism. -e Exit with zero status if <object> exists and is a valid object. If <object> is of an invalid format, exit with non-zero status and emit an error on stderr. -p Pretty-print the contents of <object> based on its type. <type> Typically this matches the real type of <object> but asking for a type that can trivially be dereferenced from the given <object> is also permitted. An example is to ask for a "tree" with <object> being a commit object that contains it, or to ask for a "blob" with <object> being a tag object that points at it. --[no-]mailmap, --[no-]use-mailmap Use mailmap file to map author, committer and tagger names and email addresses to canonical real names and email addresses. See git-shortlog(1). --textconv Show the content as transformed by a textconv filter. In this case, <object> has to be of the form <tree-ish>:<path>, or :<path> in order to apply the filter to the content recorded in the index at <path>. --filters Show the content as converted by the filters configured in the current working tree for the given <path> (i.e. smudge filters, end-of-line conversion, etc). In this case, <object> has to be of the form <tree-ish>:<path>, or :<path>. --path=<path> For use with --textconv or --filters, to allow specifying an object name and a path separately, e.g. when it is difficult to figure out the revision from which the blob came. --batch, --batch=<format> Print object information and contents for each object provided on stdin. May not be combined with any other options or arguments except --textconv, --filters, or --use-mailmap. When used with --textconv or --filters, the input lines must specify the path, separated by whitespace. See the section BATCH OUTPUT below for details. When used with --use-mailmap, for commit and tag objects, the contents part of the output shows the identities replaced using the mailmap mechanism, while the information part of the output shows the size of the object as if it actually recorded the replacement identities. 
--batch-check, --batch-check=<format> Print object information for each object provided on stdin. May not be combined with any other options or arguments except --textconv, --filters or --use-mailmap. When used with --textconv or --filters, the input lines must specify the path, separated by whitespace. See the section BATCH OUTPUT below for details. When used with --use-mailmap, for commit and tag objects, the printed object information shows the size of the object as if the identities recorded in it were replaced by the mailmap mechanism. --batch-command, --batch-command=<format> Enter a command mode that reads commands and arguments from stdin. May only be combined with --buffer, --textconv, --use-mailmap or --filters. When used with --textconv or --filters, the input lines must specify the path, separated by whitespace. See the section BATCH OUTPUT below for details. When used with --use-mailmap, for commit and tag objects, the contents command shows the identities replaced using the mailmap mechanism, while the info command shows the size of the object as if it actually recorded the replacement identities. --batch-command recognizes the following commands: contents <object> Print object contents for object reference <object>. This corresponds to the output of --batch. info <object> Print object info for object reference <object>. This corresponds to the output of --batch-check. flush Used with --buffer to execute all preceding commands that were issued since the beginning or since the last flush was issued. When --buffer is used, no output will come until a flush is issued. When --buffer is not used, commands are flushed each time without issuing flush. --batch-all-objects Instead of reading a list of objects on stdin, perform the requested batch operation on all objects in the repository and any alternate object stores (not just reachable objects). Requires --batch or --batch-check be specified. By default, the objects are visited in order sorted by their hashes; see also --unordered below. Objects are presented as-is, without respecting the "replace" mechanism of git-replace(1). --buffer Normally batch output is flushed after each object is output, so that a process can interactively read and write from cat-file. With this option, the output uses normal stdio buffering; this is much more efficient when invoking --batch-check or --batch-command on a large number of objects. --unordered When --batch-all-objects is in use, visit objects in an order which may be more efficient for accessing the object contents than hash order. The exact details of the order are unspecified, but if you do not require a specific order, this should generally result in faster output, especially with --batch. Note that cat-file will still show each object only once, even if it is stored multiple times in the repository. --allow-unknown-type Allow -s or -t to query broken/corrupt objects of unknown type. --follow-symlinks With --batch or --batch-check, follow symlinks inside the repository when requesting objects with extended SHA-1 expressions of the form tree-ish:path-in-tree. Instead of providing output about the link itself, provide output about the linked-to object. If a symlink points outside the tree-ish (e.g. a link to /foo or a root-level link to ../foo), the portion of the link which is outside the tree will be printed. This option does not (currently) work correctly when an object in the index is specified (e.g. :link instead of HEAD:link) rather than one in the tree. 
This option cannot (currently) be used unless --batch or --batch-check is used. For example, consider a git repository containing: f: a file containing "hello\n" link: a symlink to f dir/link: a symlink to ../f plink: a symlink to ../f alink: a symlink to /etc/passwd For a regular file f, echo HEAD:f | git cat-file --batch would print ce013625030ba8dba906f756967f9e9ca394464a blob 6 And echo HEAD:link | git cat-file --batch --follow-symlinks would print the same thing, as would HEAD:dir/link, as they both point at HEAD:f. Without --follow-symlinks, these would print data about the symlink itself. In the case of HEAD:link, you would see 4d1ae35ba2c8ec712fa2a379db44ad639ca277bd blob 1 Both plink and alink point outside the tree, so they would respectively print: symlink 4 ../f symlink 11 /etc/passwd -Z Only meaningful with --batch, --batch-check, or --batch-command; input and output is NUL-delimited instead of newline-delimited. -z Only meaningful with --batch, --batch-check, or --batch-command; input is NUL-delimited instead of newline-delimited. This option is deprecated in favor of -Z as the output can otherwise be ambiguous. OUTPUT top If -t is specified, one of the <type>. If -s is specified, the size of the <object> in bytes. If -e is specified, no output, unless the <object> is malformed. If -p is specified, the contents of <object> are pretty-printed. If <type> is specified, the raw (though uncompressed) contents of the <object> will be returned. BATCH OUTPUT top If --batch or --batch-check is given, cat-file will read objects from stdin, one per line, and print information about them. By default, the whole line is considered as an object, as if it were fed to git-rev-parse(1). When --batch-command is given, cat-file will read commands from stdin, one per line, and print information based on the command given. With --batch-command, the info command followed by an object will print information about the object the same way --batch-check would, and the contents command followed by an object prints contents in the same way --batch would. You can specify the information shown for each object by using a custom <format>. The <format> is copied literally to stdout for each object, with placeholders of the form %(atom) expanded, followed by a newline. The available atoms are: objectname The full hex representation of the object name. objecttype The type of the object (the same as cat-file -t reports). objectsize The size, in bytes, of the object (the same as cat-file -s reports). objectsize:disk The size, in bytes, that the object takes up on disk. See the note about on-disk sizes in the CAVEATS section below. deltabase If the object is stored as a delta on-disk, this expands to the full hex representation of the delta base object name. Otherwise, expands to the null OID (all zeroes). See CAVEATS below. rest If this atom is used in the output string, input lines are split at the first whitespace boundary. All characters before that whitespace are considered to be the object name; characters after that first run of whitespace (i.e., the "rest" of the line) are output in place of the %(rest) atom. If no format is specified, the default format is %(objectname) %(objecttype) %(objectsize). If --batch is specified, or if --batch-command is used with the contents command, the object information is followed by the object contents (consisting of %(objectsize) bytes), followed by a newline. 
For example, --batch without a custom format would produce: <oid> SP <type> SP <size> LF <contents> LF Whereas --batch-check='%(objectname) %(objecttype)' would produce: <oid> SP <type> LF If a name is specified on stdin that cannot be resolved to an object in the repository, then cat-file will ignore any custom format and print: <object> SP missing LF If a name is specified that might refer to more than one object (an ambiguous short sha), then cat-file will ignore any custom format and print: <object> SP ambiguous LF If --follow-symlinks is used, and a symlink in the repository points outside the repository, then cat-file will ignore any custom format and print: symlink SP <size> LF <symlink> LF The symlink will either be absolute (beginning with a /), or relative to the tree root. For instance, if dir/link points to ../../foo, then <symlink> will be ../foo. <size> is the size of the symlink in bytes. If --follow-symlinks is used, the following error messages will be displayed: <object> SP missing LF is printed when the initial symlink requested does not exist. dangling SP <size> LF <object> LF is printed when the initial symlink exists, but something that it (transitive-of) points to does not. loop SP <size> LF <object> LF is printed for symlink loops (or any symlinks that require more than 40 link resolutions to resolve). notdir SP <size> LF <object> LF is printed when, during symlink resolution, a file is used as a directory name. Alternatively, when -Z is passed, the line feeds in any of the above examples are replaced with NUL terminators. This ensures that output will be parsable if the output itself would contain a linefeed and is thus recommended for scripting purposes. CAVEATS top Note that the sizes of objects on disk are reported accurately, but care should be taken in drawing conclusions about which refs or objects are responsible for disk usage. The size of a packed non-delta object may be much larger than the size of objects which delta against it, but the choice of which object is the base and which is the delta is arbitrary and is subject to change during a repack. Note also that multiple copies of an object may be present in the object database; in this case, it is undefined which copys size or delta base will be reported. GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CAT-FILE(1) Pages that refer to this page: git(1), git-rev-list(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git cat-file\n\n> Provide content or type and size information for Git repository objects.\n> More information: <https://git-scm.com/docs/git-cat-file>.\n\n- Get the [s]ize of the HEAD commit in bytes:\n\n`git cat-file -s HEAD`\n\n- Get the [t]ype (blob, tree, commit, tag) of a given Git object:\n\n`git cat-file -t {{8c442dc3}}`\n\n- Pretty-[p]rint the contents of a given Git object based on its type:\n\n`git cat-file -p {{HEAD~2}}`\n
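Beyond the single-object queries above, the batch interface described in the manual page is the usual way to inspect many objects at once. A small sketch, assuming the repository has a README.md tracked at HEAD (a hypothetical path):

    # Type and size for several objects in one pass, using a custom format
    $ printf '%s\n' HEAD HEAD^{tree} HEAD:README.md | \
          git cat-file --batch-check='%(objectname) %(objecttype) %(objectsize)'
    # Dump the raw contents of a single blob
    $ git cat-file blob HEAD:README.md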
git-check-attr
git-check-attr(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-check-attr(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OUTPUT | EXAMPLES | SEE ALSO | GIT | COLOPHON GIT-CHECK-ATTR(1) Git Manual GIT-CHECK-ATTR(1) NAME top git-check-attr - Display gitattributes information SYNOPSIS top git check-attr [--source <tree-ish>] [-a | --all | <attr>...] [--] <pathname>... git check-attr --stdin [-z] [--source <tree-ish>] [-a | --all | <attr>...] DESCRIPTION top For every pathname, this command will list if each attribute is unspecified, set, or unset as a gitattribute on that pathname. OPTIONS top -a, --all List all attributes that are associated with the specified paths. If this option is used, then unspecified attributes will not be included in the output. --cached Consider .gitattributes in the index only, ignoring the working tree. --stdin Read pathnames from the standard input, one per line, instead of from the command line. -z The output format is modified to be machine-parsable. If --stdin is also given, input paths are separated with a NUL character instead of a linefeed character. --source=<tree-ish> Check attributes against the specified tree-ish. It is common to specify the source tree by naming a commit, branch, or tag associated with it. -- Interpret all preceding arguments as attributes and all following arguments as path names. If none of --stdin, --all, or -- is used, the first argument will be treated as an attribute and the rest of the arguments as pathnames. OUTPUT top The output is of the form: <path> COLON SP <attribute> COLON SP <info> LF unless -z is in effect, in which case NUL is used as delimiter: <path> NUL <attribute> NUL <info> NUL <path> is the path of a file being queried, <attribute> is an attribute being queried, and <info> can be either: unspecified when the attribute is not defined for the path. unset when the attribute is defined as false. set when the attribute is defined as true. <value> when a value has been assigned to the attribute. Buffering happens as documented under the GIT_FLUSH option in git(1). The caller is responsible for avoiding deadlocks caused by overfilling an input buffer or reading from an empty output buffer. EXAMPLES top In the examples, the following .gitattributes file is used: *.java diff=java -crlf myAttr NoMyAttr.java !myAttr README caveat=unspecified Listing a single attribute: $ git check-attr diff org/example/MyClass.java org/example/MyClass.java: diff: java Listing multiple attributes for a file: $ git check-attr crlf diff myAttr -- org/example/MyClass.java org/example/MyClass.java: crlf: unset org/example/MyClass.java: diff: java org/example/MyClass.java: myAttr: set Listing all attributes for a file: $ git check-attr --all -- org/example/MyClass.java org/example/MyClass.java: diff: java org/example/MyClass.java: myAttr: set Listing an attribute for multiple files: $ git check-attr myAttr -- org/example/MyClass.java org/example/NoMyAttr.java org/example/MyClass.java: myAttr: set org/example/NoMyAttr.java: myAttr: unspecified Not all values are equally unambiguous: $ git check-attr caveat README README: caveat: unspecified SEE ALSO top gitattributes(5). GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. 
# git check-attr\n\n> For every pathname, list if each attribute is unspecified, set, or unset as a gitattribute on that pathname.\n> More information: <https://git-scm.com/docs/git-check-attr>.\n\n- Check the values of all attributes on a file:\n\n`git check-attr --all {{path/to/file}}`\n\n- Check the value of a specific attribute on a file:\n\n`git check-attr {{attribute}} {{path/to/file}}`\n\n- Check the values of all attributes on specific files:\n\n`git check-attr --all {{path/to/file1 path/to/file2 ...}}`\n\n- Check the value of a specific attribute on one or more files:\n\n`git check-attr {{attribute}} {{path/to/file1 path/to/file2 ...}}`\n
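For scripting, the --stdin and -z options described in the manual page combine naturally with git ls-files. A minimal sketch; the attribute name diff is just an example:

    # Query the 'diff' attribute for every tracked file, NUL-delimited for safe parsing
    $ git ls-files -z | git check-attr --stdin -z diff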
git-check-ignore
git-check-ignore(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-check-ignore(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OUTPUT | EXIT STATUS | SEE ALSO | GIT | COLOPHON GIT-CHECK-IGNORE(1) Git Manual GIT-CHECK-IGNORE(1) NAME top git-check-ignore - Debug gitignore / exclude files SYNOPSIS top git check-ignore [<options>] <pathname>... git check-ignore [<options>] --stdin DESCRIPTION top For each pathname given via the command-line or from a file via --stdin, check whether the file is excluded by .gitignore (or other input files to the exclude mechanism) and output the path if it is excluded. By default, tracked files are not shown at all since they are not subject to exclude rules; but see --no-index. OPTIONS top -q, --quiet Dont output anything, just set exit status. This is only valid with a single pathname. -v, --verbose Instead of printing the paths that are excluded, for each path that matches an exclude pattern, print the exclude pattern together with the path. (Matching an exclude pattern usually means the path is excluded, but if the pattern begins with "!" then it is a negated pattern and matching it means the path is NOT excluded.) For precedence rules within and between exclude sources, see gitignore(5). --stdin Read pathnames from the standard input, one per line, instead of from the command-line. -z The output format is modified to be machine-parsable (see below). If --stdin is also given, input paths are separated with a NUL character instead of a linefeed character. -n, --non-matching Show given paths which dont match any pattern. This only makes sense when --verbose is enabled, otherwise it would not be possible to distinguish between paths which match a pattern and those which dont. --no-index Dont look in the index when undertaking the checks. This can be used to debug why a path became tracked by e.g. git add . and was not ignored by the rules as expected by the user or when developing patterns including negation to match a path previously added with git add -f. OUTPUT top By default, any of the given pathnames which match an ignore pattern will be output, one per line. If no pattern matches a given path, nothing will be output for that path; this means that path will not be ignored. If --verbose is specified, the output is a series of lines of the form: <source> <COLON> <linenum> <COLON> <pattern> <HT> <pathname> <pathname> is the path of a file being queried, <pattern> is the matching pattern, <source> is the patterns source file, and <linenum> is the line number of the pattern within that source. If the pattern contained a "!" prefix or "/" suffix, it will be preserved in the output. <source> will be an absolute path when referring to the file configured by core.excludesFile, or relative to the repository root when referring to .git/info/exclude or a per-directory exclude file. If -z is specified, the pathnames in the output are delimited by the null character; if --verbose is also specified then null characters are also used instead of colons and hard tabs: <source> <NULL> <linenum> <NULL> <pattern> <NULL> <pathname> <NULL> If -n or --non-matching are specified, non-matching pathnames will also be output, in which case all fields in each output record except for <pathname> will be empty. 
This can be useful when running non-interactively, so that files can be incrementally streamed to STDIN of a long-running check-ignore process, and for each of these files, STDOUT will indicate whether that file matched a pattern or not. (Without this option, it would be impossible to tell whether the absence of output for a given file meant that it didn't match any pattern, or that the output hadn't been generated yet.) Buffering happens as documented under the GIT_FLUSH option in git(1). The caller is responsible for avoiding deadlocks caused by overfilling an input buffer or reading from an empty output buffer. EXIT STATUS top 0 One or more of the provided paths is ignored. 1 None of the provided paths are ignored. 128 A fatal error was encountered. SEE ALSO top gitignore(5) git-config(1) git-ls-files(1) GIT top Part of the git(1) suite
# git check-ignore\n\n> Analyze and debug Git ignore/exclude (".gitignore") files.\n> More information: <https://git-scm.com/docs/git-check-ignore>.\n\n- Check whether a file or directory is ignored:\n\n`git check-ignore {{path/to/file_or_directory}}`\n\n- Check whether multiple files or directories are ignored:\n\n`git check-ignore {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`\n\n- Use pathnames, one per line, from `stdin`:\n\n`git check-ignore --stdin < {{path/to/file_list}}`\n\n- Do not check the index (used to debug why paths were tracked and not ignored):\n\n`git check-ignore --no-index {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`\n\n- Include details about the matching pattern for each path:\n\n`git check-ignore --verbose {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`\n
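When a path is unexpectedly ignored, the --verbose form is the quickest way to find the responsible rule. A sketch with hypothetical paths and line numbers; the output follows the <source>:<linenum>:<pattern><TAB><pathname> format described in the manual page:

    # Which rule ignores this file, and where is it defined?
    $ git check-ignore --verbose build/output.log
    .gitignore:3:build/	build/output.log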
git-check-mailmap
git-check-mailmap(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-check-mailmap(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OUTPUT | CONFIGURATION | MAPPING AUTHORS | GIT | COLOPHON GIT-CHECK-MAILMAP(1) Git Manual GIT-CHECK-MAILMAP(1) NAME top git-check-mailmap - Show canonical names and email addresses of contacts SYNOPSIS top git check-mailmap [<options>] <contact>... DESCRIPTION top For each Name <user@host> or <user@host> from the command-line or standard input (when using --stdin), look up the person's canonical name and email address (see "Mapping Authors" below). If found, print them; otherwise print the input as-is. OPTIONS top --stdin Read contacts, one per line, from the standard input after exhausting contacts provided on the command-line. OUTPUT top For each contact, a single line is output, terminated by a newline. If the name is provided or known to the mailmap, Name <user@host> is printed; otherwise only <user@host> is printed. CONFIGURATION top See mailmap.file and mailmap.blob in git-config(1) for how to specify a custom .mailmap target file or object. MAPPING AUTHORS top See gitmailmap(5). GIT top Part of the git(1) suite
# git check-mailmap\n\n> Show canonical names and email addresses of contacts.\n> More information: <https://git-scm.com/docs/git-check-mailmap>.\n\n- Look up the canonical name associated with an email address:\n\n`git check-mailmap "<{{email@example.com}}>"`\n
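A quick way to test a .mailmap entry before relying on it in git shortlog or git log is to feed the old identity through git check-mailmap. A sketch with made-up identities:

    # .mailmap in the repository root maps an old address to the canonical identity:
    #   Jane Doe <jane@example.com> <jdoe@old-host.example>
    $ git check-mailmap "J Doe <jdoe@old-host.example>"
    Jane Doe <jane@example.com>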
git-check-ref-format
git-check-ref-format(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-check-ref-format(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | GIT | COLOPHON GIT-CHECK-REF-FORMAT(1) Git Manual GIT-CHECK-REF-FORMAT(1) NAME top git-check-ref-format - Ensures that a reference name is well formed SYNOPSIS top git check-ref-format [--normalize] [--[no-]allow-onelevel] [--refspec-pattern] <refname> git check-ref-format --branch <branchname-shorthand> DESCRIPTION top Checks if a given refname is acceptable, and exits with a non-zero status if it is not. A reference is used in Git to specify branches and tags. A branch head is stored in the refs/heads hierarchy, while a tag is stored in the refs/tags hierarchy of the ref namespace (typically in $GIT_DIR/refs/heads and $GIT_DIR/refs/tags directories or, as entries in file $GIT_DIR/packed-refs if refs are packed by git gc). Git imposes the following rules on how references are named: 1. They can include slash / for hierarchical (directory) grouping, but no slash-separated component can begin with a dot . or end with the sequence .lock. 2. They must contain at least one /. This enforces the presence of a category like heads/, tags/ etc. but the actual names are not restricted. If the --allow-onelevel option is used, this rule is waived. 3. They cannot have two consecutive dots .. anywhere. 4. They cannot have ASCII control characters (i.e. bytes whose values are lower than \040, or \177 DEL), space, tilde ~, caret ^, or colon : anywhere. 5. They cannot have question-mark ?, asterisk *, or open bracket [ anywhere. See the --refspec-pattern option below for an exception to this rule. 6. They cannot begin or end with a slash / or contain multiple consecutive slashes (see the --normalize option below for an exception to this rule). 7. They cannot end with a dot .. 8. They cannot contain a sequence @{. 9. They cannot be the single character @. 10. They cannot contain a \. These rules make it easy for shell script based tools to parse reference names, pathname expansion by the shell when a reference name is used unquoted (by mistake), and also avoid ambiguities in certain reference name expressions (see gitrevisions(7)): 1. A double-dot .. is often used as in ref1..ref2, and in some contexts this notation means ^ref1 ref2 (i.e. not in ref1 and in ref2). 2. A tilde ~ and caret ^ are used to introduce the postfix nth parent and peel onion operation. 3. A colon : is used as in srcref:dstref to mean "use srcrefs value and store it in dstref" in fetch and push operations. It may also be used to select a specific object such as with git cat-file: "git cat-file blob v1.3.3:refs.c". 4. at-open-brace @{ is used as a notation to access a reflog entry. With the --branch option, the command takes a name and checks if it can be used as a valid branch name (e.g. when creating a new branch). But be cautious when using the previous checkout syntax that may refer to a detached HEAD state. The rule git check-ref-format --branch $name implements may be stricter than what git check-ref-format refs/heads/$name says (e.g. a dash may appear at the beginning of a ref component, but it is explicitly forbidden at the beginning of a branch name). When run with the --branch option in a repository, the input is first expanded for the previous checkout syntax @{-n}. For example, @{-1} is a way to refer the last thing that was checked out using "git switch" or "git checkout" operation. 
This option should be used by porcelains to accept this syntax anywhere a branch name is expected, so they can act as if you typed the branch name. As an exception note that, the previous checkout operation might result in a commit object name when the N-th last thing checked out was not a branch. OPTIONS top --[no-]allow-onelevel Controls whether one-level refnames are accepted (i.e., refnames that do not contain multiple /-separated components). The default is --no-allow-onelevel. --refspec-pattern Interpret <refname> as a reference name pattern for a refspec (as used with remote repositories). If this option is enabled, <refname> is allowed to contain a single * in the refspec (e.g., foo/bar*/baz or foo/bar*baz/ but not foo/bar*/baz*). --normalize Normalize refname by removing any leading slash (/) characters and collapsing runs of adjacent slashes between name components into a single slash. If the normalized refname is valid then print it to standard output and exit with a status of 0, otherwise exit with a non-zero status. (--print is a deprecated way to spell --normalize.) EXAMPLES top Print the name of the previous thing checked out: $ git check-ref-format --branch @{-1} Determine the reference name to use for a new branch: $ ref=$(git check-ref-format --normalize "refs/heads/$newbranch")|| { echo "we do not like '$newbranch' as a branch name." >&2 ; exit 1 ; } GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CHECK-REF-FORMAT(1) Pages that refer to this page: git(1), git-branch(1), git-ls-remote(1), git-tag(1), stg-new(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git check-ref-format\n\n> Check if a reference name is acceptable, and exit with a non-zero status if it is not.\n> More information: <https://git-scm.com/docs/git-check-ref-format>.\n\n- Check the format of the specified reference name:\n\n`git check-ref-format {{refs/heads/refname}}`\n\n- Print the name of the last branch checked out:\n\n`git check-ref-format --branch @{-1}`\n\n- Normalize a refname:\n\n`git check-ref-format --normalize {{refs/heads/refname}}`\n
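In scripts that create branches from user input, the command is typically used as a guard before git branch or git checkout -b. A small sketch with a hypothetical variable value:

    # Reject names that cannot be used under refs/heads/ before trying to create them
    $ name="feature/new ui"
    $ git check-ref-format "refs/heads/$name" || echo "invalid branch name: $name" >&2
    invalid branch name: feature/new ui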
git-checkout
git-checkout(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-checkout(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | DETACHED HEAD | ARGUMENT DISAMBIGUATION | EXAMPLES | CONFIGURATION | SEE ALSO | GIT | COLOPHON GIT-CHECKOUT(1) Git Manual GIT-CHECKOUT(1) NAME top git-checkout - Switch branches or restore working tree files SYNOPSIS top git checkout [-q] [-f] [-m] [<branch>] git checkout [-q] [-f] [-m] --detach [<branch>] git checkout [-q] [-f] [-m] [--detach] <commit> git checkout [-q] [-f] [-m] [[-b|-B|--orphan] <new-branch>] [<start-point>] git checkout [-f] <tree-ish> [--] <pathspec>... git checkout [-f] <tree-ish> --pathspec-from-file=<file> [--pathspec-file-nul] git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [--] <pathspec>... git checkout [-f|--ours|--theirs|-m|--conflict=<style>] --pathspec-from-file=<file> [--pathspec-file-nul] git checkout (-p|--patch) [<tree-ish>] [--] [<pathspec>...] DESCRIPTION top Updates files in the working tree to match the version in the index or the specified tree. If no pathspec was given, git checkout will also update HEAD to set the specified branch as the current branch. git checkout [<branch>] To prepare for working on <branch>, switch to it by updating the index and the files in the working tree, and by pointing HEAD at the branch. Local modifications to the files in the working tree are kept, so that they can be committed to the <branch>. If <branch> is not found but there does exist a tracking branch in exactly one remote (call it <remote>) with a matching name and --no-guess is not specified, treat as equivalent to $ git checkout -b <branch> --track <remote>/<branch> You could omit <branch>, in which case the command degenerates to "check out the current branch", which is a glorified no-op with rather expensive side-effects to show only the tracking information, if it exists, for the current branch. git checkout -b|-B <new-branch> [<start-point>] Specifying -b causes a new branch to be created as if git-branch(1) were called and then checked out. In this case you can use the --track or --no-track options, which will be passed to git branch. As a convenience, --track without -b implies branch creation; see the description of --track below. If -B is given, <new-branch> is created if it doesnt exist; otherwise, it is reset. This is the transactional equivalent of $ git branch -f <branch> [<start-point>] $ git checkout <branch> that is to say, the branch is not reset/created unless "git checkout" is successful. git checkout --detach [<branch>], git checkout [--detach] <commit> Prepare to work on top of <commit>, by detaching HEAD at it (see "DETACHED HEAD" section), and updating the index and the files in the working tree. Local modifications to the files in the working tree are kept, so that the resulting working tree will be the state recorded in the commit plus the local modifications. When the <commit> argument is a branch name, the --detach option can be used to detach HEAD at the tip of the branch (git checkout <branch> would check out that branch without detaching HEAD). Omitting <branch> detaches HEAD at the tip of the current branch. git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] [--] <pathspec>..., git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] --pathspec-from-file=<file> [--pathspec-file-nul] Overwrite the contents of the files that match the pathspec. 
When the <tree-ish> (most often a commit) is not given, overwrite working tree with the contents in the index. When the <tree-ish> is given, overwrite both the index and the working tree with the contents at the <tree-ish>. The index may contain unmerged entries because of a previous failed merge. By default, if you try to check out such an entry from the index, the checkout operation will fail and nothing will be checked out. Using -f will ignore these unmerged entries. The contents from a specific side of the merge can be checked out of the index by using --ours or --theirs. With -m, changes made to the working tree file can be discarded to re-create the original conflicted merge result. git checkout (-p|--patch) [<tree-ish>] [--] [<pathspec>...] This is similar to the previous mode, but lets you use the interactive interface to show the "diff" output and choose which hunks to use in the result. See below for the description of --patch option. OPTIONS top -q, --quiet Quiet, suppress feedback messages. --progress, --no-progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --quiet is specified. This flag enables progress reporting even if not attached to a terminal, regardless of --quiet. -f, --force When switching branches, proceed even if the index or the working tree differs from HEAD, and even if there are untracked files in the way. This is used to throw away local changes and any untracked files or directories that are in the way. When checking out paths from the index, do not fail upon unmerged entries; instead, unmerged entries are ignored. --ours, --theirs When checking out paths from the index, check out stage #2 (ours) or #3 (theirs) for unmerged paths. Note that during git rebase and git pull --rebase, ours and theirs may appear swapped; --ours gives the version from the branch the changes are rebased onto, while --theirs gives the version from the branch that holds your work that is being rebased. This is because rebase is used in a workflow that treats the history at the remote as the shared canonical one, and treats the work done on the branch you are rebasing as the third-party work to be integrated, and you are temporarily assuming the role of the keeper of the canonical history during the rebase. As the keeper of the canonical history, you need to view the history from the remote as ours (i.e. "our shared canonical history"), while what you did on your side branch as theirs (i.e. "one contributors work on top of it"). -b <new-branch> Create a new branch named <new-branch>, start it at <start-point>, and check the resulting branch out; see git-branch(1) for details. -B <new-branch> Creates the branch <new-branch>, start it at <start-point>; if it already exists, then reset it to <start-point>. And then check the resulting branch out. This is equivalent to running "git branch" with "-f" followed by "git checkout" of that branch; see git-branch(1) for details. -t, --track[=(direct|inherit)] When creating a new branch, set up "upstream" configuration. See "--track" in git-branch(1) for details. If no -b option is given, the name of the new branch will be derived from the remote-tracking branch, by looking at the local part of the refspec configured for the corresponding remote, and then stripping the initial part up to the "*". This would tell us to use hack as the local branch when branching off of origin/hack (or remotes/origin/hack, or even refs/remotes/origin/hack). 
If the given name has no slash, or the above guessing results in an empty name, the guessing is aborted. You can explicitly give a name with -b in such a case. --no-track Do not set up "upstream" configuration, even if the branch.autoSetupMerge configuration variable is true. --guess, --no-guess If <branch> is not found but there does exist a tracking branch in exactly one remote (call it <remote>) with a matching name, treat as equivalent to $ git checkout -b <branch> --track <remote>/<branch> If the branch exists in multiple remotes and one of them is named by the checkout.defaultRemote configuration variable, well use that one for the purposes of disambiguation, even if the <branch> isnt unique across all remotes. Set it to e.g. checkout.defaultRemote=origin to always checkout remote branches from there if <branch> is ambiguous but exists on the origin remote. See also checkout.defaultRemote in git-config(1). --guess is the default behavior. Use --no-guess to disable it. The default behavior can be set via the checkout.guess configuration variable. -l Create the new branchs reflog; see git-branch(1) for details. -d, --detach Rather than checking out a branch to work on it, check out a commit for inspection and discardable experiments. This is the default behavior of git checkout <commit> when <commit> is not a branch name. See the "DETACHED HEAD" section below for details. --orphan <new-branch> Create a new orphan branch, named <new-branch>, started from <start-point> and switch to it. The first commit made on this new branch will have no parents and it will be the root of a new history totally disconnected from all the other branches and commits. The index and the working tree are adjusted as if you had previously run git checkout <start-point>. This allows you to start a new history that records a set of paths similar to <start-point> by easily running git commit -a to make the root commit. This can be useful when you want to publish the tree from a commit without exposing its full history. You might want to do this to publish an open source branch of a project whose current tree is "clean", but whose full history contains proprietary or otherwise encumbered bits of code. If you want to start a disconnected history that records a set of paths that is totally different from the one of <start-point>, then you should clear the index and the working tree right after creating the orphan branch by running git rm -rf . from the top level of the working tree. Afterwards you will be ready to prepare your new files, repopulating the working tree, by copying them from elsewhere, extracting a tarball, etc. --ignore-skip-worktree-bits In sparse checkout mode, git checkout -- <paths> would update only entries matched by <paths> and sparse patterns in $GIT_DIR/info/sparse-checkout. This option ignores the sparse patterns and adds back any files in <paths>. -m, --merge When switching branches, if you have local modifications to one or more files that are different between the current branch and the branch to which you are switching, the command refuses to switch branches in order to preserve your modifications in context. However, with this option, a three-way merge between the current branch, your working tree contents, and the new branch is done, and you will be on the new branch. 
When a merge conflict happens, the index entries for conflicting paths are left unmerged, and you need to resolve the conflicts and mark the resolved paths with git add (or git rm if the merge should result in deletion of the path). When checking out paths from the index, this option lets you recreate the conflicted merge in the specified paths. This option cannot be used when checking out paths from a tree-ish. When switching branches with --merge, staged changes may be lost. --conflict=<style> The same as --merge option above, but changes the way the conflicting hunks are presented, overriding the merge.conflictStyle configuration variable. Possible values are "merge" (default), "diff3", and "zdiff3". -p, --patch Interactively select hunks in the difference between the <tree-ish> (or the index, if unspecified) and the working tree. The chosen hunks are then applied in reverse to the working tree (and if a <tree-ish> was specified, the index). This means that you can use git checkout -p to selectively discard edits from your current working tree. See the Interactive Mode section of git-add(1) to learn how to operate the --patch mode. Note that this option uses the no overlay mode by default (see also --overlay), and currently doesnt support overlay mode. --ignore-other-worktrees git checkout refuses when the wanted ref is already checked out by another worktree. This option makes it check the ref out anyway. In other words, the ref can be held by more than one worktree. --overwrite-ignore, --no-overwrite-ignore Silently overwrite ignored files when switching branches. This is the default behavior. Use --no-overwrite-ignore to abort the operation when the new branch contains ignored files. --recurse-submodules, --no-recurse-submodules Using --recurse-submodules will update the content of all active submodules according to the commit recorded in the superproject. If local modifications in a submodule would be overwritten the checkout will fail unless -f is used. If nothing (or --no-recurse-submodules) is used, submodules working trees will not be updated. Just like git-submodule(1), this will detach HEAD of the submodule. --overlay, --no-overlay In the default overlay mode, git checkout never removes files from the index or the working tree. When specifying --no-overlay, files that appear in the index and working tree, but not in <tree-ish> are removed, to make them match <tree-ish> exactly. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). <branch> Branch to checkout; if it refers to a branch (i.e., a name that, when prepended with "refs/heads/", is a valid ref), then that branch is checked out. Otherwise, if it refers to a valid commit, your HEAD becomes "detached" and you are no longer on any branch (see below for details). You can use the @{-N} syntax to refer to the N-th last branch/commit checked out using "git checkout" operation. You may also specify - which is synonymous to @{-1}. 
As a special case, you may use A...B as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. <new-branch> Name for the new branch. <start-point> The name of a commit at which to start the new branch; see git-branch(1) for details. Defaults to HEAD. As a special case, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. <tree-ish> Tree to checkout from (when paths are given). If not specified, the index will be used. As a special case, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. -- Do not interpret any more arguments as options. <pathspec>... Limits the paths affected by the operation. For more details, see the pathspec entry in gitglossary(7). DETACHED HEAD top HEAD normally refers to a named branch (e.g. master). Meanwhile, each branch refers to a specific commit. Lets look at a repo with three commits, one of them tagged, and with branch master checked out: HEAD (refers to branch 'master') | v a---b---c branch 'master' (refers to commit 'c') ^ | tag 'v2.0' (refers to commit 'b') When a commit is created in this state, the branch is updated to refer to the new commit. Specifically, git commit creates a new commit d, whose parent is commit c, and then updates branch master to refer to new commit d. HEAD still refers to branch master and so indirectly now refers to commit d: $ edit; git add; git commit HEAD (refers to branch 'master') | v a---b---c---d branch 'master' (refers to commit 'd') ^ | tag 'v2.0' (refers to commit 'b') It is sometimes useful to be able to checkout a commit that is not at the tip of any named branch, or even to create a new commit that is not referenced by a named branch. Lets look at what happens when we checkout commit b (here we show two ways this may be done): $ git checkout v2.0 # or $ git checkout master^^ HEAD (refers to commit 'b') | v a---b---c---d branch 'master' (refers to commit 'd') ^ | tag 'v2.0' (refers to commit 'b') Notice that regardless of which checkout command we use, HEAD now refers directly to commit b. This is known as being in detached HEAD state. It means simply that HEAD refers to a specific commit, as opposed to referring to a named branch. Lets see what happens when we create a commit: $ edit; git add; git commit HEAD (refers to commit 'e') | v e / a---b---c---d branch 'master' (refers to commit 'd') ^ | tag 'v2.0' (refers to commit 'b') There is now a new commit e, but it is referenced only by HEAD. We can of course add yet another commit in this state: $ edit; git add; git commit HEAD (refers to commit 'f') | v e---f / a---b---c---d branch 'master' (refers to commit 'd') ^ | tag 'v2.0' (refers to commit 'b') In fact, we can perform all the normal Git operations. But, lets look at what happens when we then checkout master: $ git checkout master HEAD (refers to branch 'master') e---f | / v a---b---c---d branch 'master' (refers to commit 'd') ^ | tag 'v2.0' (refers to commit 'b') It is important to realize that at this point nothing refers to commit f. Eventually commit f (and by extension commit e) will be deleted by the routine Git garbage collection process, unless we create a reference before that happens. 
If we have not yet moved away from commit f, any of these will create a reference to it: $ git checkout -b foo # or "git switch -c foo" (1) $ git branch foo (2) $ git tag foo (3) 1. creates a new branch foo, which refers to commit f, and then updates HEAD to refer to branch foo. In other words, well no longer be in detached HEAD state after this command. 2. similarly creates a new branch foo, which refers to commit f, but leaves HEAD detached. 3. creates a new tag foo, which refers to commit f, leaving HEAD detached. If we have moved away from commit f, then we must first recover its object name (typically by using git reflog), and then we can create a reference to it. For example, to see the last two commits to which HEAD referred, we can use either of these commands: $ git reflog -2 HEAD # or $ git log -g -2 HEAD ARGUMENT DISAMBIGUATION top When there is only one argument given and it is not -- (e.g. git checkout abc), and when the argument is both a valid <tree-ish> (e.g. a branch abc exists) and a valid <pathspec> (e.g. a file or a directory whose name is "abc" exists), Git would usually ask you to disambiguate. Because checking out a branch is so common an operation, however, git checkout abc takes "abc" as a <tree-ish> in such a situation. Use git checkout -- <pathspec> if you want to checkout these paths out of the index. EXAMPLES top 1. Paths The following sequence checks out the master branch, reverts the Makefile to two revisions back, deletes hello.c by mistake, and gets it back from the index. $ git checkout master (1) $ git checkout master~2 Makefile (2) $ rm -f hello.c $ git checkout hello.c (3) 1. switch branch 2. take a file out of another commit 3. restore hello.c from the index If you want to check out all C source files out of the index, you can say $ git checkout -- '*.c' Note the quotes around *.c. The file hello.c will also be checked out, even though it is no longer in the working tree, because the file globbing is used to match entries in the index (not in the working tree by the shell). If you have an unfortunate branch that is named hello.c, this step would be confused as an instruction to switch to that branch. You should instead write: $ git checkout -- hello.c 2. Merge After working in the wrong branch, switching to the correct branch would be done using: $ git checkout mytopic However, your "wrong" branch and correct mytopic branch may differ in files that you have modified locally, in which case the above checkout would fail like this: $ git checkout mytopic error: You have local changes to 'frotz'; not switching branches. You can give the -m flag to the command, which would try a three-way merge: $ git checkout -m mytopic Auto-merging frotz After this three-way merge, the local modifications are not registered in your index file, so git diff would show you what changes you made since the tip of the new branch. 3. Merge conflict When a merge conflict happens during switching branches with the -m option, you would see something like this: $ git checkout -m mytopic Auto-merging frotz ERROR: Merge conflict in frotz fatal: merge program failed At this point, git diff shows the changes cleanly merged as in the previous example, as well as the changes in the conflicted files. Edit and resolve the conflict and mark it resolved with git add as usual: $ edit frotz $ git add frotz CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. 
The content is the same as whats found there: checkout.defaultRemote When you run git checkout <something> or git switch <something> and only have one remote, it may implicitly fall back on checking out and tracking e.g. origin/<something>. This stops working as soon as you have more than one remote with a <something> reference. This setting allows for setting the name of a preferred remote that should always win when it comes to disambiguation. The typical use-case is to set this to origin. Currently this is used by git-switch(1) and git-checkout(1) when git checkout <something> or git switch <something> will checkout the <something> branch on another remote, and by git-worktree(1) when git worktree add refers to a remote branch. This setting might be used for other checkout-like commands or functionality in the future. checkout.guess Provides the default value for the --guess or --no-guess option in git checkout and git switch. See git-switch(1) and git-checkout(1). checkout.workers The number of parallel workers to use when updating the working tree. The default is one, i.e. sequential execution. If set to a value less than one, Git will use as many workers as the number of logical cores available. This setting and checkout.thresholdForParallelism affect all commands that perform checkout. E.g. checkout, clone, reset, sparse-checkout, etc. Note: Parallel checkout usually delivers better performance for repositories located on SSDs or over NFS. For repositories on spinning disks and/or machines with a small number of cores, the default sequential checkout often performs better. The size and compression level of a repository might also influence how well the parallel version performs. checkout.thresholdForParallelism When running parallel checkout with a small number of files, the cost of subprocess spawning and inter-process communication might outweigh the parallelization gains. This setting allows you to define the minimum number of files for which parallel checkout should be attempted. The default is 100. SEE ALSO top git-switch(1), git-restore(1) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CHECKOUT(1) Pages that refer to this page: git(1), git-checkout(1), git-commit(1), git-config(1), git-restore(1), git-stash(1), git-switch(1), git-worktree(1), githooks(5), gitrepository-layout(5) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git checkout\n\n> Checkout a branch or paths to the working tree.\n> More information: <https://git-scm.com/docs/git-checkout>.\n\n- Create and switch to a new branch:\n\n`git checkout -b {{branch_name}}`\n\n- Create and switch to a new branch based on a specific reference (branch, remote/branch, tag are examples of valid references):\n\n`git checkout -b {{branch_name}} {{reference}}`\n\n- Switch to an existing local branch:\n\n`git checkout {{branch_name}}`\n\n- Switch to the previously checked out branch:\n\n`git checkout -`\n\n- Switch to an existing remote branch:\n\n`git checkout --track {{remote_name}}/{{branch_name}}`\n\n- Discard all unstaged changes in the current directory (see `git reset` for more undo-like commands):\n\n`git checkout .`\n\n- Discard unstaged changes to a given file:\n\n`git checkout {{path/to/file}}`\n\n- Replace a file in the current directory with the version of it committed in a given branch:\n\n`git checkout {{branch_name}} -- {{path/to/file}}`\n
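One workflow the manual page's DETACHED HEAD section walks through is inspecting an old state and then either keeping or discarding that excursion. A compact sketch, reusing the tag name v2.0 from the examples above and a hypothetical branch name hotfix:

    # Detach HEAD at the tagged release to look around
    $ git checkout v2.0
    # Keep working from here on a new branch...
    $ git checkout -b hotfix
    # ...or drop the excursion and return to the previously checked-out branch
    $ git checkout -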
git-checkout-index
git-checkout-index(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-checkout-index(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | USING --TEMP OR --STAGE=ALL | EXAMPLES | GIT | COLOPHON GIT-CHECKOUT-INDEX(1) Git Manual GIT-CHECKOUT-INDEX(1) NAME top git-checkout-index - Copy files from the index to the working tree SYNOPSIS top git checkout-index [-u] [-q] [-a] [-f] [-n] [--prefix=<string>] [--stage=<number>|all] [--temp] [--ignore-skip-worktree-bits] [-z] [--stdin] [--] [<file>...] DESCRIPTION top Copies all listed files from the index to the working directory (not overwriting existing files). OPTIONS top -u, --index update stat information for the checked out entries in the index file. -q, --quiet be quiet if files exist or are not in the index -f, --force forces overwrite of existing files -a, --all checks out all files in the index except for those with the skip-worktree bit set (see --ignore-skip-worktree-bits). Cannot be used together with explicit filenames. -n, --no-create Dont checkout new files, only refresh files already checked out. --prefix=<string> When creating files, prepend <string> (usually a directory including a trailing /) --stage=<number>|all Instead of checking out unmerged entries, copy out the files from the named stage. <number> must be between 1 and 3. Note: --stage=all automatically implies --temp. --temp Instead of copying the files to the working directory, write the content to temporary files. The temporary name associations will be written to stdout. --ignore-skip-worktree-bits Check out all files, including those with the skip-worktree bit set. --stdin Instead of taking a list of paths from the command line, read the list of paths from the standard input. Paths are separated by LF (i.e. one path per line) by default. -z Only meaningful with --stdin; paths are separated with NUL character instead of LF. -- Do not interpret any more arguments as options. The order of the flags used to matter, but not anymore. Just doing git checkout-index does nothing. You probably meant git checkout-index -a. And if you want to force it, you want git checkout-index -f -a. Intuitiveness is not the goal here. Repeatability is. The reason for the "no arguments means no work" behavior is that from scripts you are supposed to be able to do: $ find . -name '*.h' -print0 | xargs -0 git checkout-index -f -- which will force all existing *.h files to be replaced with their cached copies. If an empty command line implied "all", then this would force-refresh everything in the index, which was not the point. But since git checkout-index accepts --stdin it would be faster to use: $ find . -name '*.h' -print0 | git checkout-index -f -z --stdin The -- is just a good idea when you know the rest will be filenames; it will prevent problems with a filename of, for example, -a. Using -- is probably a good policy in scripts. USING --TEMP OR --STAGE=ALL top When --temp is used (or implied by --stage=all) git checkout-index will create a temporary file for each index entry being checked out. The index will not be updated with stat information. These options can be useful if the caller needs all stages of all unmerged entries so that the unmerged files can be processed by an external merge tool. A listing will be written to stdout providing the association of temporary file names to tracked path names. The listing format has two variations: 1. 
tempname TAB path RS The first format is what gets used when --stage is omitted or is not --stage=all. The field tempname is the temporary file name holding the file content and path is the tracked path name in the index. Only the requested entries are output. 2. stage1temp SP stage2temp SP stage3tmp TAB path RS The second format is what gets used when --stage=all. The three stage temporary fields (stage1temp, stage2temp, stage3temp) list the name of the temporary file if there is a stage entry in the index or . if there is no stage entry. Paths which only have a stage 0 entry will always be omitted from the output. In both formats RS (the record separator) is newline by default but will be the null byte if -z was passed on the command line. The temporary file names are always safe strings; they will never contain directory separators or whitespace characters. The path field is always relative to the current directory and the temporary file names are always relative to the top level directory. If the object being copied out to a temporary file is a symbolic link the content of the link will be written to a normal file. It is up to the end-user or the Porcelain to make use of this information. EXAMPLES top To update and refresh only the files already checked out $ git checkout-index -n -f -a && git update-index --ignore-missing --refresh Using git checkout-index to "export an entire tree" The prefix ability basically makes it trivial to use git checkout-index as an "export as tree" function. Just read the desired tree into the index, and do: $ git checkout-index --prefix=git-export-dir/ -a git checkout-index will "export" the index into the specified directory. The final "/" is important. The exported name is literally just prefixed with the specified string. Contrast this with the following example. Export files with a prefix $ git checkout-index --prefix=.merged- Makefile This will check out the currently cached copy of Makefile into the file .merged-Makefile. GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CHECKOUT-INDEX(1) Pages that refer to this page: git(1), git-read-tree(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git checkout-index\n\n> Copy files from the index to the working tree.\n> More information: <https://git-scm.com/docs/git-checkout-index>.\n\n- Restore any files deleted since the last commit:\n\n`git checkout-index --all`\n\n- Restore any files deleted or changed since the last commit:\n\n`git checkout-index --all --force`\n\n- Restore any files changed since the last commit, ignoring any files that were deleted:\n\n`git checkout-index --all --force --no-create`\n\n- Export a copy of the entire tree at the last commit to the specified directory (the trailing slash is important):\n\n`git checkout-index --all --force --prefix={{path/to/export_directory/}}`\n
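The --stage/--temp mechanism described in the manual page above is also useful for handing every version of a conflicted path to an external merge tool. A minimal sketch, assuming a merge has stopped with conflicts in a file called conflicted.txt (a placeholder name):

# Copy the stage 1 (base), stage 2 (ours) and stage 3 (theirs) index entries of the
# conflicted path into temporary files; the temp-name-to-path mapping is printed on stdout.
$ git checkout-index --stage=all -- conflicted.txt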
git-cherry
git-cherry(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-cherry(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | SEE ALSO | GIT | COLOPHON GIT-CHERRY(1) Git Manual GIT-CHERRY(1) NAME top git-cherry - Find commits yet to be applied to upstream SYNOPSIS top git cherry [-v] [<upstream> [<head> [<limit>]]] DESCRIPTION top Determine whether there are commits in <head>..<upstream> that are equivalent to those in the range <limit>..<head>. The equivalence test is based on the diff, after removing whitespace and line numbers. git-cherry therefore detects when commits have been "copied" by means of git-cherry-pick(1), git-am(1) or git-rebase(1). Outputs the SHA1 of every commit in <limit>..<head>, prefixed with - for commits that have an equivalent in <upstream>, and + for commits that do not. OPTIONS top -v Show the commit subjects next to the SHA1s. <upstream> Upstream branch to search for equivalent commits. Defaults to the upstream branch of HEAD. <head> Working branch; defaults to HEAD. <limit> Do not report commits up to (and including) limit. EXAMPLES top Patch workflows git-cherry is frequently used in patch-based workflows (see gitworkflows(7)) to determine if a series of patches has been applied by the upstream maintainer. In such a workflow you might create and send a topic branch like this: $ git checkout -b topic origin/master # work and create some commits $ git format-patch origin/master $ git send-email ... 00* Later, you can see whether your changes have been applied by saying (still on topic): $ git fetch # update your notion of origin/master $ git cherry -v Concrete example In a situation where topic consisted of three commits, and the maintainer applied two of them, the situation might look like: $ git log --graph --oneline --decorate --boundary origin/master...topic * 7654321 (origin/master) upstream tip commit [... snip some other commits ...] * cccc111 cherry-pick of C * aaaa111 cherry-pick of A [... snip a lot more that has happened ...] | * cccc000 (topic) commit C | * bbbb000 commit B | * aaaa000 commit A |/ o 1234567 branch point In such cases, git-cherry shows a concise summary of what has yet to be applied: $ git cherry origin/master topic - cccc000... commit C + bbbb000... commit B - aaaa000... commit A Here, we see that the commits A and C (marked with -) can be dropped from your topic branch when you rebase it on top of origin/master, while the commit B (marked with +) still needs to be kept so that it will be sent to be applied to origin/master. Using a limit The optional <limit> is useful in cases where your topic is based on other work that is not in upstream. Expanding on the previous example, this might look like: $ git log --graph --oneline --decorate --boundary origin/master...topic * 7654321 (origin/master) upstream tip commit [... snip some other commits ...] * cccc111 cherry-pick of C * aaaa111 cherry-pick of A [... snip a lot more that has happened ...] | * cccc000 (topic) commit C | * bbbb000 commit B | * aaaa000 commit A | * 0000fff (base) unpublished stuff F [... snip ...] | * 0000aaa unpublished stuff A |/ o 1234567 merge-base between upstream and topic By specifying base as the limit, you can avoid listing commits between base and topic: $ git cherry origin/master topic base - cccc000... commit C + bbbb000... commit B - aaaa000... 
commit A SEE ALSO top git-patch-id(1) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CHERRY(1) Pages that refer to this page: git(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git cherry\n\n> Find commits that have yet to be applied upstream.\n> More information: <https://git-scm.com/docs/git-cherry>.\n\n- List commits (with their messages), marking those that already have an equivalent upstream (`-`) and those that still need to be applied (`+`):\n\n`git cherry -v`\n\n- Specify a different upstream and topic branch:\n\n`git cherry {{origin}} {{topic}}`\n\n- Exclude commits up to, and including, a given base reference:\n\n`git cherry {{origin}} {{topic}} {{base}}`\n
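Putting the examples above together, a typical pre-rebase check on a topic branch might look like the following sketch; origin/master and topic are placeholders for the actual upstream and working branch:

$ git fetch origin                     # refresh the local notion of origin/master
$ git cherry -v origin/master topic    # '-' lines are already upstream, '+' lines still need to be applied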
git-cherry-pick
git-cherry-pick(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-cherry-pick(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEQUENCER SUBCOMMANDS | EXAMPLES | SEE ALSO | GIT | COLOPHON GIT-CHERRY-PICK(1) Git Manual GIT-CHERRY-PICK(1) NAME top git-cherry-pick - Apply the changes introduced by some existing commits SYNOPSIS top git cherry-pick [--edit] [-n] [-m <parent-number>] [-s] [-x] [--ff] [-S[<keyid>]] <commit>... git cherry-pick (--continue | --skip | --abort | --quit) DESCRIPTION top Given one or more existing commits, apply the change each one introduces, recording a new commit for each. This requires your working tree to be clean (no modifications from the HEAD commit). When it is not obvious how to apply a change, the following happens: 1. The current branch and HEAD pointer stay at the last commit successfully made. 2. The CHERRY_PICK_HEAD ref is set to point at the commit that introduced the change that is difficult to apply. 3. Paths in which the change applied cleanly are updated both in the index file and in your working tree. 4. For conflicting paths, the index file records up to three versions, as described in the "TRUE MERGE" section of git-merge(1). The working tree files will include a description of the conflict bracketed by the usual conflict markers <<<<<<< and >>>>>>>. 5. No other modifications are made. See git-merge(1) for some hints on resolving such conflicts. OPTIONS top <commit>... Commits to cherry-pick. For a more complete list of ways to spell commits, see gitrevisions(7). Sets of commits can be passed but no traversal is done by default, as if the --no-walk option was specified, see git-rev-list(1). Note that specifying a range will feed all <commit>... arguments to a single revision walk (see a later example that uses maint master..next). -e, --edit With this option, git cherry-pick will let you edit the commit message prior to committing. --cleanup=<mode> This option determines how the commit message will be cleaned up before being passed on to the commit machinery. See git-commit(1) for more details. In particular, if the <mode> is given a value of scissors, scissors will be appended to MERGE_MSG before being passed on in the case of a conflict. -x When recording the commit, append a line that says "(cherry picked from commit ...)" to the original commit message in order to indicate which commit this change was cherry-picked from. This is done only for cherry picks without conflicts. Do not use this option if you are cherry-picking from your private branch because the information is useless to the recipient. If on the other hand you are cherry-picking between two publicly visible branches (e.g. backporting a fix to a maintenance branch for an older release from a development branch), adding this information can be useful. -r It used to be that the command defaulted to do -x described above, and -r was to disable it. Now the default is not to do -x so this option is a no-op. -m <parent-number>, --mainline <parent-number> Usually you cannot cherry-pick a merge because you do not know which side of the merge should be considered the mainline. This option specifies the parent number (starting from 1) of the mainline and allows cherry-pick to replay the change relative to the specified parent. -n, --no-commit Usually the command automatically creates a sequence of commits. 
This flag applies the changes necessary to cherry-pick each named commit to your working tree and the index, without making any commit. In addition, when this option is used, your index does not have to match the HEAD commit. The cherry-pick is done against the beginning state of your index. This is useful when cherry-picking more than one commits' effect to your index in a row. -s, --signoff Add a Signed-off-by trailer at the end of the commit message. See the signoff option in git-commit(1) for more information. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. --ff If the current HEAD is the same as the parent of the cherry-picked commit, then a fast forward to this commit will be performed. --allow-empty By default, cherry-picking an empty commit will fail, indicating that an explicit invocation of git commit --allow-empty is required. This option overrides that behavior, allowing empty commits to be preserved automatically in a cherry-pick. Note that when "--ff" is in effect, empty commits that meet the "fast-forward" requirement will be kept even without this option. Note also, that use of this option only keeps commits that were initially empty (i.e. the commit recorded the same tree as its parent). Commits which are made empty due to a previous commit are dropped. To force the inclusion of those commits use --keep-redundant-commits. --allow-empty-message By default, cherry-picking a commit with an empty message will fail. This option overrides that behavior, allowing commits with empty messages to be cherry picked. --keep-redundant-commits If a commit being cherry picked duplicates a commit already in the current history, it will become empty. By default these redundant commits cause cherry-pick to stop so the user can examine the commit. This option overrides that behavior and creates an empty commit object. Implies --allow-empty. --strategy=<strategy> Use the given merge strategy. Should only be used once. See the MERGE STRATEGIES section in git-merge(1) for details. -X<option>, --strategy-option=<option> Pass the merge strategy-specific option through to the merge strategy. See git-merge(1) for details. --rerere-autoupdate, --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. --no-rerere-autoupdate is a good way to double-check what rerere did and catch potential mismerges, before committing the result to the index with a separate git add. SEQUENCER SUBCOMMANDS top --continue Continue the operation in progress using the information in .git/sequencer. Can be used to continue after resolving conflicts in a failed cherry-pick or revert. --skip Skip the current commit and continue with the rest of the sequence. --quit Forget about the current operation in progress. Can be used to clear the sequencer state after a failed cherry-pick or revert. --abort Cancel the operation and return to the pre-sequence state. EXAMPLES top git cherry-pick master Apply the change introduced by the commit at the tip of the master branch and create a new commit with this change. 
git cherry-pick ..master, git cherry-pick ^HEAD master Apply the changes introduced by all commits that are ancestors of master but not of HEAD to produce new commits. git cherry-pick maint next ^master, git cherry-pick maint master..next Apply the changes introduced by all commits that are ancestors of maint or next, but not master or any of its ancestors. Note that the latter does not mean maint and everything between master and next; specifically, maint will not be used if it is included in master. git cherry-pick master~4 master~2 Apply the changes introduced by the fifth and third last commits pointed to by master and create 2 new commits with these changes. git cherry-pick -n master~1 next Apply to the working tree and the index the changes introduced by the second last commit pointed to by master and by the last commit pointed to by next, but do not create any commit with these changes. git cherry-pick --ff ..next If history is linear and HEAD is an ancestor of next, update the working tree and advance the HEAD pointer to match next. Otherwise, apply the changes introduced by those commits that are in next but not HEAD to the current branch, creating a new commit for each new change. git rev-list --reverse master -- README | git cherry-pick -n --stdin Apply the changes introduced by all commits on the master branch that touched README to the working tree and index, so the result can be inspected and made into a single new commit if suitable. The following sequence attempts to backport a patch, bails out because the code the patch applies to has changed too much, and then tries again, this time exercising more care about matching up context lines. $ git cherry-pick topic^ (1) $ git diff (2) $ git cherry-pick --abort (3) $ git cherry-pick -Xpatience topic^ (4) 1. apply the change that would be shown by git show topic^. In this example, the patch does not apply cleanly, so information about the conflict is written to the index and working tree and no new commit results. 2. summarize changes to be reconciled 3. cancel the cherry-pick. In other words, return to the pre-cherry-pick state, preserving any local modifications you had in the working tree. 4. try to apply the change introduced by topic^ again, spending extra time to avoid mistakes based on incorrectly matching context lines. SEE ALSO top git-revert(1) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CHERRY-PICK(1) Pages that refer to this page: git(1), git-cherry(1), git-revert(1), gitworkflows(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git cherry-pick\n\n> Apply the changes introduced by existing commits to the current branch.\n> To apply changes to another branch, first use `git checkout` to switch to the desired branch.\n> More information: <https://git-scm.com/docs/git-cherry-pick>.\n\n- Apply a commit to the current branch:\n\n`git cherry-pick {{commit}}`\n\n- Apply a range of commits to the current branch (see also `git rebase --onto`):\n\n`git cherry-pick {{start_commit}}~..{{end_commit}}`\n\n- Apply multiple (non-sequential) commits to the current branch:\n\n`git cherry-pick {{commit1 commit2 ...}}`\n\n- Add the changes of a commit to the working directory, without creating a commit:\n\n`git cherry-pick --no-commit {{commit}}`\n
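When a pick stops on a conflict, the sequencer subcommands described above drive the rest of the operation. A minimal sketch, with {{commit}} and the resolved path as placeholders:

$ git cherry-pick {{commit}}           # stops on conflict and sets CHERRY_PICK_HEAD
$ git status                           # inspect the conflicting paths
$ git add {{path/to/resolved_file}}    # mark each conflict as resolved
$ git cherry-pick --continue           # record the commit (or use --abort to back out)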
git-clean
git-clean(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-clean(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | INTERACTIVE MODE | CONFIGURATION | SEE ALSO | GIT | COLOPHON GIT-CLEAN(1) Git Manual GIT-CLEAN(1) NAME top git-clean - Remove untracked files from the working tree SYNOPSIS top git clean [-d] [-f] [-i] [-n] [-q] [-e <pattern>] [-x | -X] [--] [<pathspec>...] DESCRIPTION top Cleans the working tree by recursively removing files that are not under version control, starting from the current directory. Normally, only files unknown to Git are removed, but if the -x option is specified, ignored files are also removed. This can, for example, be useful to remove all build products. If any optional <pathspec>... arguments are given, only those paths that match the pathspec are affected. OPTIONS top -d Normally, when no <pathspec> is specified, git clean will not recurse into untracked directories to avoid removing too much. Specify -d to have it recurse into such directories as well. If a <pathspec> is specified, -d is irrelevant; all untracked files matching the specified paths (with exceptions for nested git directories mentioned under --force) will be removed. -f, --force If the Git configuration variable clean.requireForce is not set to false, git clean will refuse to delete files or directories unless given -f or -i. Git will refuse to modify untracked nested git repositories (directories with a .git subdirectory) unless a second -f is given. -i, --interactive Show what would be done and clean files interactively. See Interactive mode for details. -n, --dry-run Dont actually remove anything, just show what would be done. -q, --quiet Be quiet, only report errors, but not the files that are successfully removed. -e <pattern>, --exclude=<pattern> Use the given exclude pattern in addition to the standard ignore rules (see gitignore(5)). -x Dont use the standard ignore rules (see gitignore(5)), but still use the ignore rules given with -e options from the command line. This allows removing all untracked files, including build products. This can be used (possibly in conjunction with git restore or git reset) to create a pristine working directory to test a clean build. -X Remove only files ignored by Git. This may be useful to rebuild everything from scratch, but keep manually created files. INTERACTIVE MODE top When the command enters the interactive mode, it shows the files and directories to be cleaned, and goes into its interactive command loop. The command loop shows the list of subcommands available, and gives a prompt "What now> ". In general, when the prompt ends with a single >, you can pick only one of the choices given and type return, like this: *** Commands *** 1: clean 2: filter by pattern 3: select by numbers 4: ask each 5: quit 6: help What now> 1 You also could say c or clean above as long as the choice is unique. The main command loop has 6 subcommands. clean Start cleaning files and directories, and then quit. filter by pattern This shows the files and directories to be deleted and issues an "Input ignore patterns>>" prompt. You can input space-separated patterns to exclude files and directories from deletion. E.g. "*.c *.h" will exclude files ending with ".c" and ".h" from deletion. When you are satisfied with the filtered result, press ENTER (empty) back to the main menu. select by numbers This shows the files and directories to be deleted and issues an "Select items to delete>>" prompt. 
When the prompt ends with double >> like this, you can make more than one selection, concatenated with whitespace or comma. Also you can say ranges. E.g. "2-5 7,9" to choose 2,3,4,5,7,9 from the list. If the second number in a range is omitted, all remaining items are selected. E.g. "7-" to choose 7,8,9 from the list. You can say * to choose everything. Also when you are satisfied with the filtered result, press ENTER (empty) back to the main menu. ask each This will start to clean, and you must confirm one by one in order to delete items. Please note that this action is not as efficient as the above two actions. quit This lets you quit without doing any cleaning. help Show brief usage of interactive git-clean. CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as whats found there: clean.requireForce A boolean to make git-clean do nothing unless given -f, -i, or -n. Defaults to true. SEE ALSO top gitignore(5) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CLEAN(1) Pages that refer to this page: git(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git clean\n\n> Remove files not tracked by Git from the working tree.\n> More information: <https://git-scm.com/docs/git-clean>.\n\n- Delete untracked files:\n\n`git clean`\n\n- [i]nteractively delete untracked files:\n\n`git clean -i`\n\n- Show which files would be deleted without actually deleting them:\n\n`git clean --dry-run`\n\n- [f]orcefully delete untracked files:\n\n`git clean -f`\n\n- [f]orcefully delete untracked [d]irectories:\n\n`git clean -fd`\n\n- Delete untracked files, including e[x]cluded files (files ignored in `.gitignore` and `.git/info/exclude`):\n\n`git clean -x`\n
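A cautious cleanup usually previews the deletions before forcing them, optionally including ignored build products. A sketch of that sequence (no placeholders; note that -x also removes ignored files):

$ git clean -nd     # dry run: list untracked files and directories that would be removed
$ git clean -fd     # actually remove them
$ git clean -fdx    # same, but also remove files ignored via .gitignore (e.g. build products)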
git-clone
git-clone(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-clone(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | GIT URLS | EXAMPLES | CONFIGURATION | GIT | COLOPHON GIT-CLONE(1) Git Manual GIT-CLONE(1) NAME top git-clone - Clone a repository into a new directory SYNOPSIS top git clone [--template=<template-directory>] [-l] [-s] [--no-hardlinks] [-q] [-n] [--bare] [--mirror] [-o <name>] [-b <name>] [-u <upload-pack>] [--reference <repository>] [--dissociate] [--separate-git-dir <git-dir>] [--depth <depth>] [--[no-]single-branch] [--no-tags] [--recurse-submodules[=<pathspec>]] [--[no-]shallow-submodules] [--[no-]remote-submodules] [--jobs <n>] [--sparse] [--[no-]reject-shallow] [--filter=<filter> [--also-filter-submodules]] [--] <repository> [<directory>] DESCRIPTION top Clones a repository into a newly created directory, creates remote-tracking branches for each branch in the cloned repository (visible using git branch --remotes), and creates and checks out an initial branch that is forked from the cloned repositorys currently active branch. After the clone, a plain git fetch without arguments will update all the remote-tracking branches, and a git pull without arguments will in addition merge the remote master branch into the current master branch, if any (this is untrue when "--single-branch" is given; see below). This default configuration is achieved by creating references to the remote branch heads under refs/remotes/origin and by initializing remote.origin.url and remote.origin.fetch configuration variables. OPTIONS top -l, --local When the repository to clone from is on a local machine, this flag bypasses the normal "Git aware" transport mechanism and clones the repository by making a copy of HEAD and everything under objects and refs directories. The files under .git/objects/ directory are hardlinked to save space when possible. If the repository is specified as a local path (e.g., /path/to/repo), this is the default, and --local is essentially a no-op. If the repository is specified as a URL, then this flag is ignored (and we never use the local optimizations). Specifying --no-local will override the default when /path/to/repo is given, using the regular Git transport instead. If the repositorys $GIT_DIR/objects has symbolic links or is a symbolic link, the clone will fail. This is a security measure to prevent the unintentional copying of files by dereferencing the symbolic links. NOTE: this operation can race with concurrent modification to the source repository, similar to running cp -r src dst while modifying src. --no-hardlinks Force the cloning process from a repository on a local filesystem to copy the files under the .git/objects directory instead of using hardlinks. This may be desirable if you are trying to make a back-up of your repository. -s, --shared When the repository to clone is on the local machine, instead of using hard links, automatically setup .git/objects/info/alternates to share the objects with the source repository. The resulting repository starts out without any object of its own. NOTE: this is a possibly dangerous operation; do not use it unless you understand what it does. If you clone your repository using this option and then delete branches (or use any other Git command that makes any existing commit unreferenced) in the source repository, some objects may become unreferenced (or dangling). 
These objects may be removed by normal Git operations (such as git commit) which automatically call git maintenance run --auto. (See git-maintenance(1).) If these objects are removed and were referenced by the cloned repository, then the cloned repository will become corrupt. Note that running git repack without the --local option in a repository cloned with --shared will copy objects from the source repository into a pack in the cloned repository, removing the disk space savings of clone --shared. It is safe, however, to run git gc, which uses the --local option by default. If you want to break the dependency of a repository cloned with --shared on its source repository, you can simply run git repack -a to copy all objects from the source repository into a pack in the cloned repository. --reference[-if-able] <repository> If the reference repository is on the local machine, automatically setup .git/objects/info/alternates to obtain objects from the reference repository. Using an already existing repository as an alternate will require fewer objects to be copied from the repository being cloned, reducing network and local storage costs. When using the --reference-if-able, a non existing directory is skipped with a warning instead of aborting the clone. NOTE: see the NOTE for the --shared option, and also the --dissociate option. --dissociate Borrow the objects from reference repositories specified with the --reference options only to reduce network transfer, and stop borrowing from them after a clone is made by making necessary local copies of borrowed objects. This option can also be used when cloning locally from a repository that already borrows objects from another repositorythe new repository will borrow objects from the same repository, and this option can be used to stop the borrowing. -q, --quiet Operate quietly. Progress is not reported to the standard error stream. -v, --verbose Run verbosely. Does not affect the reporting of progress status to the standard error stream. --progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --quiet is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. --server-option=<option> Transmit the given string to the server when communicating using protocol version 2. The given string must not contain a NUL or LF character. The servers handling of server options, including unknown ones, is server-specific. When multiple --server-option=<option> are given, they are all sent to the other side in the order listed on the command line. -n, --no-checkout No checkout of HEAD is performed after the clone is complete. --[no-]reject-shallow Fail if the source repository is a shallow repository. The clone.rejectShallow configuration variable can be used to specify the default. --bare Make a bare Git repository. That is, instead of creating <directory> and placing the administrative files in <directory>/.git, make the <directory> itself the $GIT_DIR. This obviously implies the --no-checkout because there is nowhere to check out the working tree. Also the branch heads at the remote are copied directly to corresponding local branch heads, without mapping them to refs/remotes/origin/. When this option is used, neither remote-tracking branches nor the related configuration variables are created. --sparse Employ a sparse-checkout, with only files in the toplevel directory initially being present. 
The git-sparse-checkout(1) command can be used to grow the working directory as needed. --filter=<filter-spec> Use the partial clone feature and request that the server sends a subset of reachable objects according to a given object filter. When using --filter, the supplied <filter-spec> is used for the partial clone filter. For example, --filter=blob:none will filter out all blobs (file contents) until needed by Git. Also, --filter=blob:limit=<size> will filter out all blobs of size at least <size>. For more details on filter specifications, see the --filter option in git-rev-list(1). --also-filter-submodules Also apply the partial clone filter to any submodules in the repository. Requires --filter and --recurse-submodules. This can be turned on by default by setting the clone.filterSubmodules config option. --mirror Set up a mirror of the source repository. This implies --bare. Compared to --bare, --mirror not only maps local branches of the source to local branches of the target, it maps all refs (including remote-tracking branches, notes etc.) and sets up a refspec configuration such that all these refs are overwritten by a git remote update in the target repository. -o <name>, --origin <name> Instead of using the remote name origin to keep track of the upstream repository, use <name>. Overrides clone.defaultRemoteName from the config. -b <name>, --branch <name> Instead of pointing the newly created HEAD to the branch pointed to by the cloned repositorys HEAD, point to <name> branch instead. In a non-bare repository, this is the branch that will be checked out. --branch can also take tags and detaches the HEAD at that commit in the resulting repository. -u <upload-pack>, --upload-pack <upload-pack> When given, and the repository to clone from is accessed via ssh, this specifies a non-default path for the command run on the other end. --template=<template-directory> Specify the directory from which templates will be used; (See the "TEMPLATE DIRECTORY" section of git-init(1).) -c <key>=<value>, --config <key>=<value> Set a configuration variable in the newly-created repository; this takes effect immediately after the repository is initialized, but before the remote history is fetched or any files checked out. The key is in the same format as expected by git-config(1) (e.g., core.eol=true). If multiple values are given for the same key, each value will be written to the config file. This makes it safe, for example, to add additional fetch refspecs to the origin remote. Due to limitations of the current implementation, some configuration variables do not take effect until after the initial fetch and checkout. Configuration variables known to not take effect are: remote.<name>.mirror and remote.<name>.tagOpt. Use the corresponding --mirror and --no-tags options instead. --depth <depth> Create a shallow clone with a history truncated to the specified number of commits. Implies --single-branch unless --no-single-branch is given to fetch the histories near the tips of all branches. If you want to clone submodules shallowly, also pass --shallow-submodules. --shallow-since=<date> Create a shallow clone with a history after the specified time. --shallow-exclude=<revision> Create a shallow clone with a history, excluding commits reachable from a specified remote branch or tag. This option can be specified multiple times. --[no-]single-branch Clone only the history leading to the tip of a single branch, either specified by the --branch option or the primary branch remotes HEAD points at. 
Further fetches into the resulting repository will only update the remote-tracking branch for the branch this option was used for the initial cloning. If the HEAD at the remote did not point at any branch when --single-branch clone was made, no remote-tracking branch is created. --no-tags Dont clone any tags, and set remote.<remote>.tagOpt=--no-tags in the config, ensuring that future git pull and git fetch operations wont follow any tags. Subsequent explicit tag fetches will still work, (see git-fetch(1)). Can be used in conjunction with --single-branch to clone and maintain a branch with no references other than a single cloned branch. This is useful e.g. to maintain minimal clones of the default branch of some repository for search indexing. --recurse-submodules[=<pathspec>] After the clone is created, initialize and clone submodules within based on the provided pathspec. If no pathspec is provided, all submodules are initialized and cloned. This option can be given multiple times for pathspecs consisting of multiple entries. The resulting clone has submodule.active set to the provided pathspec, or "." (meaning all submodules) if no pathspec is provided. Submodules are initialized and cloned using their default settings. This is equivalent to running git submodule update --init --recursive <pathspec> immediately after the clone is finished. This option is ignored if the cloned repository does not have a worktree/checkout (i.e. if any of --no-checkout/-n, --bare, or --mirror is given) --[no-]shallow-submodules All submodules which are cloned will be shallow with a depth of 1. --[no-]remote-submodules All submodules which are cloned will use the status of the submodules remote-tracking branch to update the submodule, rather than the superprojects recorded SHA-1. Equivalent to passing --remote to git submodule update. --separate-git-dir=<git-dir> Instead of placing the cloned repository where it is supposed to be, place the cloned repository at the specified directory, then make a filesystem-agnostic Git symbolic link to there. The result is Git repository can be separated from working tree. -j <n>, --jobs <n> The number of submodules fetched at the same time. Defaults to the submodule.fetchJobs option. <repository> The (possibly remote) repository to clone from. See the GIT URLS section below for more information on specifying repositories. <directory> The name of a new directory to clone into. The "humanish" part of the source repository is used if no directory is explicitly given (repo for /path/to/repo.git and foo for host.xz:foo/.git). Cloning into an existing directory is only allowed if the directory is empty. --bundle-uri=<uri> Before fetching from the remote, fetch a bundle from the given <uri> and unbundle the data into the local repository. The refs in the bundle will be stored under the hidden refs/bundle/* namespace. This option is incompatible with --depth, --shallow-since, and --shallow-exclude. GIT URLS top In general, URLs contain information about the transport protocol, the address of the remote server, and the path to the repository. Depending on the transport protocol, some of this information may be absent. Git supports ssh, git, http, and https protocols (in addition, ftp and ftps can be used for fetching, but this is inefficient and deprecated; do not use them). The native transport (i.e. git:// URL) does no authentication and should be used with caution on unsecured networks. 
The following syntaxes may be used with them: ssh://[user@]host.xz[:port]/path/to/repo.git/ git://host.xz[:port]/path/to/repo.git/ http[s]://host.xz[:port]/path/to/repo.git/ ftp[s]://host.xz[:port]/path/to/repo.git/ An alternative scp-like syntax may also be used with the ssh protocol: [user@]host.xz:path/to/repo.git/ This syntax is only recognized if there are no slashes before the first colon. This helps differentiate a local path that contains a colon. For example the local path foo:bar could be specified as an absolute path or ./foo:bar to avoid being misinterpreted as an ssh url. The ssh and git protocols additionally support ~username expansion: ssh://[user@]host.xz[:port]/~[user]/path/to/repo.git/ git://host.xz[:port]/~[user]/path/to/repo.git/ [user@]host.xz:/~[user]/path/to/repo.git/ For local repositories, also supported by Git natively, the following syntaxes may be used: /path/to/repo.git/ file:///path/to/repo.git/ These two syntaxes are mostly equivalent, except the former implies --local option. git clone, git fetch and git pull, but not git push, will also accept a suitable bundle file. See git-bundle(1). When Git doesnt know how to handle a certain transport protocol, it attempts to use the remote-<transport> remote helper, if one exists. To explicitly request a remote helper, the following syntax may be used: <transport>::<address> where <address> may be a path, a server and path, or an arbitrary URL-like string recognized by the specific remote helper being invoked. See gitremote-helpers(7) for details. If there are a large number of similarly-named remote repositories and you want to use a different format for them (such that the URLs you use will be rewritten into URLs that work), you can create a configuration section of the form: [url "<actual url base>"] insteadOf = <other url base> For example, with this: [url "git://git.host.xz/"] insteadOf = host.xz:/path/to/ insteadOf = work: a URL like "work:repo.git" or like "host.xz:/path/to/repo.git" will be rewritten in any context that takes a URL to be "git://git.host.xz/repo.git". If you want to rewrite URLs for push only, you can create a configuration section of the form: [url "<actual url base>"] pushInsteadOf = <other url base> For example, with this: [url "ssh://example.org/"] pushInsteadOf = git://example.org/ a URL like "git://example.org/path/to/repo.git" will be rewritten to "ssh://example.org/path/to/repo.git" for pushes, but pulls will still use the original URL. EXAMPLES top Clone from upstream: $ git clone git://git.kernel.org/pub/scm/.../linux.git my-linux $ cd my-linux $ make Make a local clone that borrows from the current directory, without checking things out: $ git clone -l -s -n . ../copy $ cd ../copy $ git show-branch Clone from upstream while borrowing from an existing local directory: $ git clone --reference /git/linux.git \ git://git.kernel.org/pub/scm/.../linux.git \ my-linux $ cd my-linux Create a bare repository to publish your changes to the public: $ git clone --bare -l /home/proj/.git /pub/scm/proj.git CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as whats found there: init.templateDir Specify the directory from which templates will be copied. (See the "TEMPLATE DIRECTORY" section of git-init(1).) init.defaultBranch Allows overriding the default branch name e.g. when initializing a new repository. clone.defaultRemoteName The name of the remote to create when cloning a repository. 
Defaults to origin, and can be overridden by passing the --origin command-line option to git-clone(1). clone.rejectShallow Reject cloning a repository if it is a shallow one; this can be overridden by passing the --reject-shallow option on the command line. See git-clone(1) clone.filterSubmodules If a partial clone filter is provided (see --filter in git-rev-list(1)) and --recurse-submodules is used, also apply the filter to submodules. GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CLONE(1) Pages that refer to this page: git(1), git-bundle(1), git-clone(1), git-config(1), git-fetch(1), git-filter-branch(1), git-p4(1), git-pull(1), git-push(1), git-submodule(1), git-worktree(1), scalar(1), gitformat-bundle(5), githooks(5), gitmodules(5), gitprotocol-v2(5), gitrepository-layout(5), giteveryday(7), gitglossary(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git clone\n\n> Clone an existing repository.\n> More information: <https://git-scm.com/docs/git-clone>.\n\n- Clone an existing repository into a new directory (the default directory is the repository name):\n\n`git clone {{remote_repository_location}} {{path/to/directory}}`\n\n- Clone an existing repository and its submodules:\n\n`git clone --recursive {{remote_repository_location}}`\n\n- Clone an existing repository without checking out a working tree (only the repository data under `.git`):\n\n`git clone --no-checkout {{remote_repository_location}}`\n\n- Clone a local repository:\n\n`git clone --local {{path/to/local/repository}}`\n\n- Clone quietly:\n\n`git clone --quiet {{remote_repository_location}}`\n\n- Clone an existing repository only fetching the 10 most recent commits on the default branch (useful to save time):\n\n`git clone --depth {{10}} {{remote_repository_location}}`\n\n- Clone an existing repository only fetching a specific branch:\n\n`git clone --branch {{name}} --single-branch {{remote_repository_location}}`\n\n- Clone an existing repository using a specific SSH command:\n\n`git clone --config core.sshCommand="{{ssh -i path/to/private_ssh_key}}" {{remote_repository_location}}`\n
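For very large repositories, the shallow and partial-clone options described above are often what makes a clone practical. A sketch, with the repository URL and branch name as placeholders:

$ git clone --depth 1 --branch {{main}} {{remote_repository_location}}    # one commit of history, single branch
$ git clone --filter=blob:none {{remote_repository_location}}             # full history, file contents fetched on demand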
git-column
git-column(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-column(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | CONFIGURATION | GIT | COLOPHON GIT-COLUMN(1) Git Manual GIT-COLUMN(1) NAME top git-column - Display data in columns SYNOPSIS top git column [--command=<name>] [--[raw-]mode=<mode>] [--width=<width>] [--indent=<string>] [--nl=<string>] [--padding=<n>] DESCRIPTION top This command formats the lines of its standard input into a table with multiple columns. Each input line occupies one cell of the table. It is used internally by other git commands to format output into columns. OPTIONS top --command=<name> Look up layout mode using configuration variable column.<name> and column.ui. --mode=<mode> Specify layout mode. See configuration variable column.ui for option syntax in git-config(1). --raw-mode=<n> Same as --mode but take mode encoded as a number. This is mainly used by other commands that have already parsed layout mode. --width=<width> Specify the terminal width. By default git column will detect the terminal width, or fall back to 80 if it is unable to do so. --indent=<string> String to be printed at the beginning of each line. --nl=<string> String to be printed at the end of each line, including newline character. --padding=<N> The number of spaces between columns. One space by default. EXAMPLES top Format data by columns: $ seq 1 24 | git column --mode=column --padding=5 1 4 7 10 13 16 19 22 2 5 8 11 14 17 20 23 3 6 9 12 15 18 21 24 Format data by rows: $ seq 1 21 | git column --mode=row --padding=5 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 List some tags in a table with unequal column widths: $ git tag --list 'v2.4.*' --column=row,dense v2.4.0 v2.4.0-rc0 v2.4.0-rc1 v2.4.0-rc2 v2.4.0-rc3 v2.4.1 v2.4.10 v2.4.11 v2.4.12 v2.4.2 v2.4.3 v2.4.4 v2.4.5 v2.4.6 v2.4.7 v2.4.8 v2.4.9 CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as whats found there: column.ui Specify whether supported commands should output in columns. This variable consists of a list of tokens separated by spaces or commas: These options control when the feature should be enabled (defaults to never): always always show in columns never never show in columns auto show in columns if the output is to the terminal These options control layout (defaults to column). Setting any of these implies always if none of always, never, or auto are specified. column fill columns before rows row fill rows before columns plain show in one column Finally, these options can be combined with a layout option (defaults to nodense): dense make unequal size columns to utilize more space nodense make equal size columns column.branch Specify whether to output branch listing in git branch in columns. See column.ui for details. column.clean Specify the layout when listing items in git clean -i, which always shows files and directories in columns. See column.ui for details. column.status Specify whether to output untracked files in git status in columns. See column.ui for details. column.tag Specify whether to output tag listings in git tag in columns. See column.ui for details. GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. 
This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-COLUMN(1) Pages that refer to this page: git(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git column\n\n> Display data in columns.\n> More information: <https://git-scm.com/docs/git-column>.\n\n- Format `stdin` as multiple columns:\n\n`ls | git column --mode={{column}}`\n\n- Format `stdin` as multiple columns, assuming a terminal width of `100`:\n\n`ls | git column --mode=column --width={{100}}`\n\n- Format `stdin` as multiple columns with `30` spaces of padding between columns:\n\n`ls | git column --mode=column --padding={{30}}`\n
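Most of the time this machinery is driven through the column.* configuration described above rather than by piping into git column directly. A sketch, assuming the defaults have not already been changed:

$ git config --global column.ui auto           # columns for supported commands when output goes to a terminal
$ git config --global column.tag "row dense"   # per-command override for 'git tag' listings
$ git branch                                   # branch names now appear in columns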
git-commit
git-commit(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-commit(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | COMMIT INFORMATION | DATE FORMATS | DISCUSSION | ENVIRONMENT AND CONFIGURATION VARIABLES | HOOKS | FILES | SEE ALSO | GIT | COLOPHON GIT-COMMIT(1) Git Manual GIT-COMMIT(1) NAME top git-commit - Record changes to the repository SYNOPSIS top git commit [-a | --interactive | --patch] [-s] [-v] [-u<mode>] [--amend] [--dry-run] [(-c | -C | --squash) <commit> | --fixup [(amend|reword):]<commit>)] [-F <file> | -m <msg>] [--reset-author] [--allow-empty] [--allow-empty-message] [--no-verify] [-e] [--author=<author>] [--date=<date>] [--cleanup=<mode>] [--[no-]status] [-i | -o] [--pathspec-from-file=<file> [--pathspec-file-nul]] [(--trailer <token>[(=|:)<value>])...] [-S[<keyid>]] [--] [<pathspec>...] DESCRIPTION top Create a new commit containing the current contents of the index and the given log message describing the changes. The new commit is a direct child of HEAD, usually the tip of the current branch, and the branch is updated to point to it (unless no branch is associated with the working tree, in which case HEAD is "detached" as described in git-checkout(1)). The content to be committed can be specified in several ways: 1. by using git-add(1) to incrementally "add" changes to the index before using the commit command (Note: even modified files must be "added"); 2. by using git-rm(1) to remove files from the working tree and the index, again before using the commit command; 3. by listing files as arguments to the commit command (without --interactive or --patch switch), in which case the commit will ignore changes staged in the index, and instead record the current content of the listed files (which must already be known to Git); 4. by using the -a switch with the commit command to automatically "add" changes from all known files (i.e. all files that are already listed in the index) and to automatically "rm" files in the index that have been removed from the working tree, and then perform the actual commit; 5. by using the --interactive or --patch switches with the commit command to decide one by one which files or hunks should be part of the commit in addition to contents in the index, before finalizing the operation. See the Interactive Mode section of git-add(1) to learn how to operate these modes. The --dry-run option can be used to obtain a summary of what is included by any of the above for the next commit by giving the same set of parameters (options and paths). If you make a commit and then find a mistake immediately after that, you can recover from it with git reset. OPTIONS top -a, --all Tell the command to automatically stage files that have been modified and deleted, but new files you have not told Git about are not affected. -p, --patch Use the interactive patch selection interface to choose which changes to commit. See git-add(1) for details. -C <commit>, --reuse-message=<commit> Take an existing commit object, and reuse the log message and the authorship information (including the timestamp) when creating the commit. -c <commit>, --reedit-message=<commit> Like -C, but with -c the editor is invoked, so that the user can further edit the commit message. --fixup=[(amend|reword):]<commit> Create a new commit which "fixes up" <commit> when applied with git rebase --autosquash. Plain --fixup=<commit> creates a "fixup!" 
commit which changes the content of <commit> but leaves its log message untouched. --fixup=amend:<commit> is similar but creates an "amend!" commit which also replaces the log message of <commit> with the log message of the "amend!" commit. --fixup=reword:<commit> creates an "amend!" commit which replaces the log message of <commit> with its own log message but makes no changes to the content of <commit>. The commit created by plain --fixup=<commit> has a subject composed of "fixup!" followed by the subject line from <commit>, and is recognized specially by git rebase --autosquash. The -m option may be used to supplement the log message of the created commit, but the additional commentary will be thrown away once the "fixup!" commit is squashed into <commit> by git rebase --autosquash. The commit created by --fixup=amend:<commit> is similar but its subject is instead prefixed with "amend!". The log message of <commit> is copied into the log message of the "amend!" commit and opened in an editor so it can be refined. When git rebase --autosquash squashes the "amend!" commit into <commit>, the log message of <commit> is replaced by the refined log message from the "amend!" commit. It is an error for the "amend!" commits log message to be empty unless --allow-empty-message is specified. --fixup=reword:<commit> is shorthand for --fixup=amend:<commit> --only. It creates an "amend!" commit with only a log message (ignoring any changes staged in the index). When squashed by git rebase --autosquash, it replaces the log message of <commit> without making any other changes. Neither "fixup!" nor "amend!" commits change authorship of <commit> when applied by git rebase --autosquash. See git-rebase(1) for details. --squash=<commit> Construct a commit message for use with rebase --autosquash. The commit message subject line is taken from the specified commit with a prefix of "squash! ". Can be used with additional commit message options (-m/-c/-C/-F). See git-rebase(1) for details. --reset-author When used with -C/-c/--amend options, or when committing after a conflicting cherry-pick, declare that the authorship of the resulting commit now belongs to the committer. This also renews the author timestamp. --short When doing a dry-run, give the output in the short-format. See git-status(1) for details. Implies --dry-run. --branch Show the branch and tracking info even in short-format. --porcelain When doing a dry-run, give the output in a porcelain-ready format. See git-status(1) for details. Implies --dry-run. --long When doing a dry-run, give the output in the long-format. Implies --dry-run. -z, --null When showing short or porcelain status output, print the filename verbatim and terminate the entries with NUL, instead of LF. If no format is given, implies the --porcelain output format. Without the -z option, filenames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). -F <file>, --file=<file> Take the commit message from the given file. Use - to read the message from the standard input. --author=<author> Override the commit author. Specify an explicit author using the standard A U Thor <author@example.com> format. Otherwise <author> is assumed to be a pattern and is used to search for an existing commit by that author (i.e. rev-list --all -i --author=<author>); the commit author is then copied from the first such commit found. --date=<date> Override the author date used in the commit. 
-m <msg>, --message=<msg> Use the given <msg> as the commit message. If multiple -m options are given, their values are concatenated as separate paragraphs. The -m option is mutually exclusive with -c, -C, and -F. -t <file>, --template=<file> When editing the commit message, start the editor with the contents in the given file. The commit.template configuration variable is often used to give this option implicitly to the command. This mechanism can be used by projects that want to guide participants with some hints on what to write in the message in what order. If the user exits the editor without editing the message, the commit is aborted. This has no effect when a message is given by other means, e.g. with the -m or -F options. -s, --signoff, --no-signoff Add a Signed-off-by trailer by the committer at the end of the commit log message. The meaning of a signoff depends on the project to which youre committing. For example, it may certify that the committer has the rights to submit the work under the projects license or agrees to some contributor representation, such as a Developer Certificate of Origin. (See https://developercertificate.org for the one used by the Linux kernel and Git projects.) Consult the documentation or leadership of the project to which youre contributing to understand how the signoffs are used in that project. The --no-signoff option can be used to countermand an earlier --signoff option on the command line. --trailer <token>[(=|:)<value>] Specify a (<token>, <value>) pair that should be applied as a trailer. (e.g. git commit --trailer "Signed-off-by:C O Mitter \ <committer@example.com>" --trailer "Helped-by:C O Mitter \ <committer@example.com>" will add the "Signed-off-by" trailer and the "Helped-by" trailer to the commit message.) The trailer.* configuration variables ( git-interpret-trailers(1)) can be used to define if a duplicated trailer is omitted, where in the run of trailers each trailer would appear, and other details. -n, --[no-]verify By default, the pre-commit and commit-msg hooks are run. When any of --no-verify or -n is given, these are bypassed. See also githooks(5). --allow-empty Usually recording a commit that has the exact same tree as its sole parent commit is a mistake, and the command prevents you from making such a commit. This option bypasses the safety, and is primarily for use by foreign SCM interface scripts. --allow-empty-message Like --allow-empty this command is primarily for use by foreign SCM interface scripts. It allows you to create a commit with an empty commit message without using plumbing commands like git-commit-tree(1). --cleanup=<mode> This option determines how the supplied commit message should be cleaned up before committing. The <mode> can be strip, whitespace, verbatim, scissors or default. strip Strip leading and trailing empty lines, trailing whitespace, commentary and collapse consecutive empty lines. whitespace Same as strip except #commentary is not removed. verbatim Do not change the message at all. scissors Same as whitespace except that everything from (and including) the line found below is truncated, if the message is to be edited. "#" can be customized with core.commentChar. # ------------------------ >8 ------------------------ default Same as strip if the message is to be edited. Otherwise whitespace. The default can be changed by the commit.cleanup configuration variable (see git-config(1)). 
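A brief, hedged example of the sign-off and trailer options described above (names and addresses are placeholders): $ git commit -s -m "Fix overflow in input parser"
$ git commit -m "Fix overflow in input parser" \
      --trailer "Helped-by: C O Mitter <committer@example.com>" # appends a Helped-by trailer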
-e, --edit The message taken from file with -F, command line with -m, and from commit object with -C are usually used as the commit log message unmodified. This option lets you further edit the message taken from these sources. --no-edit Use the selected commit message without launching an editor. For example, git commit --amend --no-edit amends a commit without changing its commit message. --amend Replace the tip of the current branch by creating a new commit. The recorded tree is prepared as usual (including the effect of the -i and -o options and explicit pathspec), and the message from the original commit is used as the starting point, instead of an empty message, when no other message is specified from the command line via options such as -m, -F, -c, etc. The new commit has the same parents and author as the current one (the --reset-author option can countermand this). It is a rough equivalent for: $ git reset --soft HEAD^ $ ... do something else to come up with the right tree ... $ git commit -c ORIG_HEAD but can be used to amend a merge commit. You should understand the implications of rewriting history if you amend a commit that has already been published. (See the "RECOVERING FROM UPSTREAM REBASE" section in git-rebase(1).) --no-post-rewrite Bypass the post-rewrite hook. -i, --include Before making a commit out of staged contents so far, stage the contents of paths given on the command line as well. This is usually not what you want unless you are concluding a conflicted merge. -o, --only Make a commit by taking the updated working tree contents of the paths specified on the command line, disregarding any contents that have been staged for other paths. This is the default mode of operation of git commit if any paths are given on the command line, in which case this option can be omitted. If this option is specified together with --amend, then no paths need to be specified, which can be used to amend the last commit without committing changes that have already been staged. If used together with --allow-empty paths are also not required, and an empty commit will be created. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). -u[<mode>], --untracked-files[=<mode>] Show untracked files. The mode parameter is optional (defaults to all), and is used to specify the handling of untracked files; when -u is not used, the default is normal, i.e. show untracked files and directories. The possible options are: no - Show no untracked files normal - Shows untracked files and directories all - Also shows individual files in untracked directories. The default can be changed using the status.showUntrackedFiles configuration variable documented in git-config(1). -v, --verbose Show unified diff between the HEAD commit and what would be committed at the bottom of the commit message template to help the user describe the commit by reminding what changes the commit has. Note that this diff output doesnt have its lines prefixed with #. This diff will not be a part of the commit message. 
See the commit.verbose configuration variable in git-config(1). If specified twice, show in addition the unified diff between what would be committed and the worktree files, i.e. the unstaged changes to tracked files. -q, --quiet Suppress commit summary message. --dry-run Do not create a commit, but show a list of paths that are to be committed, paths with local changes that will be left uncommitted and paths that are untracked. --status Include the output of git-status(1) in the commit message template when using an editor to prepare the commit message. Defaults to on, but can be used to override configuration variable commit.status. --no-status Do not include the output of git-status(1) in the commit message template when using an editor to prepare the default commit message. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. -- Do not interpret any more arguments as options. <pathspec>... When pathspec is given on the command line, commit the contents of the files that match the pathspec without recording the changes already added to the index. The contents of these files are also staged for the next commit on top of what have been staged before. For more details, see the pathspec entry in gitglossary(7). EXAMPLES top When recording your own work, the contents of modified files in your working tree are temporarily stored to a staging area called the "index" with git add. A file can be reverted back, only in the index but not in the working tree, to that of the last commit with git restore --staged <file>, which effectively reverts git add and prevents the changes to this file from participating in the next commit. After building the state to be committed incrementally with these commands, git commit (without any pathname parameter) is used to record what has been staged so far. This is the most basic form of the command. An example: $ edit hello.c $ git rm goodbye.c $ git add hello.c $ git commit Instead of staging files after each individual change, you can tell git commit to notice the changes to the files whose contents are tracked in your working tree and do corresponding git add and git rm for you. That is, this example does the same as the earlier example if there is no other change in your working tree: $ edit hello.c $ rm goodbye.c $ git commit -a The command git commit -a first looks at your working tree, notices that you have modified hello.c and removed goodbye.c, and performs necessary git add and git rm for you. After staging changes to many files, you can alter the order the changes are recorded in, by giving pathnames to git commit. When pathnames are given, the command makes a commit that only records the changes made to the named paths: $ edit hello.c hello.h $ git add hello.c hello.h $ edit Makefile $ git commit Makefile This makes a commit that records the modification to Makefile. The changes staged for hello.c and hello.h are not included in the resulting commit. However, their changes are not lost they are still staged and merely held back. After the above sequence, if you do: $ git commit this second commit would record the changes to hello.c and hello.h as expected. 
After a merge (initiated by git merge or git pull) stops because of conflicts, cleanly merged paths are already staged to be committed for you, and paths that conflicted are left in unmerged state. You would have to first check which paths are conflicting with git status and after fixing them manually in your working tree, you would stage the result as usual with git add: $ git status | grep unmerged unmerged: hello.c $ edit hello.c $ git add hello.c After resolving conflicts and staging the result, git ls-files -u would stop mentioning the conflicted path. When you are done, run git commit to finally record the merge: $ git commit As with the case to record your own changes, you can use -a option to save typing. One difference is that during a merge resolution, you cannot use git commit with pathnames to alter the order the changes are committed, because the merge should be recorded as a single commit. In fact, the command refuses to run when given pathnames (but see -i option). COMMIT INFORMATION top Author and committer information is taken from the following environment variables, if set: GIT_AUTHOR_NAME GIT_AUTHOR_EMAIL GIT_AUTHOR_DATE GIT_COMMITTER_NAME GIT_COMMITTER_EMAIL GIT_COMMITTER_DATE (nb "<", ">" and "\n"s are stripped) The author and committer names are by convention some form of a personal name (that is, the name by which other humans refer to you), although Git does not enforce or require any particular form. Arbitrary Unicode may be used, subject to the constraints listed above. This name has no effect on authentication; for that, see the credential.username variable in git-config(1). In case (some of) these environment variables are not set, the information is taken from the configuration items user.name and user.email, or, if not present, the environment variable EMAIL, or, if that is not set, system user name and the hostname used for outgoing mail (taken from /etc/mailname and falling back to the fully qualified hostname when that file does not exist). The author.name and committer.name and their corresponding email options override user.name and user.email if set and are overridden themselves by the environment variables. The typical usage is to set just the user.name and user.email variables; the other options are provided for more complex use cases. DATE FORMATS top The GIT_AUTHOR_DATE and GIT_COMMITTER_DATE environment variables support the following date formats: Git internal format It is <unix-timestamp> <time-zone-offset>, where <unix-timestamp> is the number of seconds since the UNIX epoch. <time-zone-offset> is a positive or negative offset from UTC. For example CET (which is 1 hour ahead of UTC) is +0100. RFC 2822 The standard email format as described by RFC 2822, for example Thu, 07 Apr 2005 22:13:13 +0200. ISO 8601 Time and date specified by the ISO 8601 standard, for example 2005-04-07T22:13:13. The parser accepts a space instead of the T character as well. Fractional parts of a second will be ignored, for example 2005-04-07T22:13:13.019 will be treated as 2005-04-07T22:13:13. Note In addition, the date part is accepted in the following formats: YYYY.MM.DD, MM/DD/YYYY and DD.MM.YYYY. In addition to recognizing all date formats above, the --date option will also try to make sense of other, more human-centric date formats, such as relative dates like "yesterday" or "last Friday at noon". 
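For illustration only (the dates shown are arbitrary), the author date can be supplied in any of the formats listed above: $ git commit --amend --no-edit --date="Thu, 07 Apr 2005 22:13:13 +0200" # RFC 2822
$ git commit --amend --no-edit --date="2005-04-07T22:13:13" # ISO 8601
$ git commit --amend --no-edit --date="last Friday at noon" # human-centric form accepted by --date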
DISCUSSION top Though not required, its a good idea to begin the commit message with a single short (no more than 50 characters) line summarizing the change, followed by a blank line and then a more thorough description. The text up to the first blank line in a commit message is treated as the commit title, and that title is used throughout Git. For example, git-format-patch(1) turns a commit into email, and it uses the title on the Subject line and the rest of the commit in the body. Git is to some extent character encoding agnostic. The contents of the blob objects are uninterpreted sequences of bytes. There is no encoding translation at the core level. Path names are encoded in UTF-8 normalization form C. This applies to tree objects, the index file, ref names, as well as path names in command line arguments, environment variables and config files (.git/config (see git-config(1)), gitignore(5), gitattributes(5) and gitmodules(5)). Note that Git at the core level treats path names simply as sequences of non-NUL bytes, there are no path name encoding conversions (except on Mac and Windows). Therefore, using non-ASCII path names will mostly work even on platforms and file systems that use legacy extended ASCII encodings. However, repositories created on such systems will not work properly on UTF-8-based systems (e.g. Linux, Mac, Windows) and vice versa. Additionally, many Git-based tools simply assume path names to be UTF-8 and will fail to display other encodings correctly. Commit log messages are typically encoded in UTF-8, but other extended ASCII encodings are also supported. This includes ISO-8859-x, CP125x and many others, but not UTF-16/32, EBCDIC and CJK multi-byte encodings (GBK, Shift-JIS, Big5, EUC-x, CP9xx etc.). Although we encourage that the commit log messages are encoded in UTF-8, both the core and Git Porcelain are designed not to force UTF-8 on projects. If all participants of a particular project find it more convenient to use legacy encodings, Git does not forbid it. However, there are a few things to keep in mind. 1. git commit and git commit-tree issue a warning if the commit log message given to it does not look like a valid UTF-8 string, unless you explicitly say your project uses a legacy encoding. The way to say this is to have i18n.commitEncoding in .git/config file, like this: [i18n] commitEncoding = ISO-8859-1 Commit objects created with the above setting record the value of i18n.commitEncoding in their encoding header. This is to help other people who look at them later. Lack of this header implies that the commit log message is encoded in UTF-8. 2. git log, git show, git blame and friends look at the encoding header of a commit object, and try to re-code the log message into UTF-8 unless otherwise specified. You can specify the desired output encoding with i18n.logOutputEncoding in .git/config file, like this: [i18n] logOutputEncoding = ISO-8859-1 If you do not have this configuration variable, the value of i18n.commitEncoding is used instead. Note that we deliberately chose not to re-code the commit log message when a commit is made to force UTF-8 at the commit object level, because re-coding to UTF-8 is not necessarily a reversible operation. ENVIRONMENT AND CONFIGURATION VARIABLES top The editor used to edit the commit log message will be chosen from the GIT_EDITOR environment variable, the core.editor configuration variable, the VISUAL environment variable, or the EDITOR environment variable (in that order). See git-var(1) for details. 
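A small sketch of the editor selection order just described (the editor names are only examples): $ git config --global core.editor vim # consulted when GIT_EDITOR is unset
$ GIT_EDITOR=nano git commit # GIT_EDITOR wins over core.editor, VISUAL and EDITOR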
Everything above this line in this section isn't included from the git-config(1) documentation. The content that follows is the same as what's found there: commit.cleanup This setting overrides the default of the --cleanup option in git commit. See git-commit(1) for details. Changing the default can be useful when you always want to keep lines that begin with the comment character # in your log message, in which case you would do git config commit.cleanup whitespace (note that you will have to remove the help lines that begin with # in the commit log template yourself, if you do this). commit.gpgSign A boolean to specify whether all commits should be GPG signed. Use of this option when doing operations such as rebase can result in a large number of commits being signed. It may be convenient to use an agent to avoid typing your GPG passphrase several times. commit.status A boolean to enable/disable inclusion of status information in the commit message template when using an editor to prepare the commit message. Defaults to true. commit.template Specify the pathname of a file to use as the template for new commit messages. commit.verbose A boolean or int to specify the level of verbosity with git commit. See git-commit(1). HOOKS top This command can run commit-msg, prepare-commit-msg, pre-commit, post-commit and post-rewrite hooks. See githooks(5) for more information. FILES top $GIT_DIR/COMMIT_EDITMSG This file contains the commit message of a commit in progress. If git commit exits due to an error before creating a commit, any commit message that has been provided by the user (e.g., in an editor session) will be available in this file, but will be overwritten by the next invocation of git commit. SEE ALSO top git-add(1), git-rm(1), git-mv(1), git-merge(1), git-commit-tree(1) GIT top Part of the git(1) suite
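The configuration variables listed above can be set with git-config(1); for example (the values and the template path are illustrative only): $ git config commit.cleanup whitespace # keep '#' lines in log messages
$ git config commit.template ~/.gitmessage.txt # hypothetical template file
$ git config commit.verbose true # show the diff in the message template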
# git commit

> Commit files to the repository.
> More information: <https://git-scm.com/docs/git-commit>.

- Commit staged files to the repository with a message:

`git commit --message "{{message}}"`

- Commit staged files with a message read from a file:

`git commit --file {{path/to/commit_message_file}}`

- Auto stage all modified and deleted files and commit with a message:

`git commit --all --message "{{message}}"`

- Commit staged files and sign them with the specified GPG key (or the one defined in the configuration file if no key is specified; note that the key id must be attached to the option without a space):

`git commit --gpg-sign={{key_id}} --message "{{message}}"`

- Update the last commit by adding the currently staged changes, changing the commit's hash:

`git commit --amend`

- Commit only specific (already staged) files:

`git commit {{path/to/file1}} {{path/to/file2}}`

- Create a commit, even if there are no staged files:

`git commit --message "{{message}}" --allow-empty`
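Two further invocations that may be useful alongside the summary above (a sketch, not part of the original page): $ git commit --patch # choose hunks interactively before committing
$ git commit --dry-run --short # preview what would be committed, in short status format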
git-commit-graph
git-commit-graph(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-commit-graph(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | COMMANDS | EXAMPLES | CONFIGURATION | FILE FORMAT | GIT | COLOPHON GIT-COMMIT-GRAPH(1) Git Manual GIT-COMMIT-GRAPH(1) NAME top git-commit-graph - Write and verify Git commit-graph files SYNOPSIS top git commit-graph verify [--object-dir <dir>] [--shallow] [--[no-]progress] git commit-graph write [--object-dir <dir>] [--append] [--split[=<strategy>]] [--reachable | --stdin-packs | --stdin-commits] [--changed-paths] [--[no-]max-new-filters <n>] [--[no-]progress] <split options> DESCRIPTION top Manage the serialized commit-graph file. OPTIONS top --object-dir Use given directory for the location of packfiles and commit-graph file. This parameter exists to specify the location of an alternate that only has the objects directory, not a full .git directory. The commit-graph file is expected to be in the <dir>/info directory and the packfiles are expected to be in <dir>/pack. If the directory could not be made into an absolute path, or does not match any known object directory, git commit-graph ... will exit with non-zero status. --[no-]progress Turn progress on/off explicitly. If neither is specified, progress is shown if standard error is connected to a terminal. COMMANDS top write Write a commit-graph file based on the commits found in packfiles. If the config option core.commitGraph is disabled, then this command will output a warning, then return success without writing a commit-graph file. With the --stdin-packs option, generate the new commit graph by walking objects only in the specified pack-indexes. (Cannot be combined with --stdin-commits or --reachable.) With the --stdin-commits option, generate the new commit graph by walking commits starting at the commits specified in stdin as a list of OIDs in hex, one OID per line. OIDs that resolve to non-commits (either directly, or by peeling tags) are silently ignored. OIDs that are malformed, or do not exist generate an error. (Cannot be combined with --stdin-packs or --reachable.) With the --reachable option, generate the new commit graph by walking commits starting at all refs. (Cannot be combined with --stdin-commits or --stdin-packs.) With the --append option, include all commits that are present in the existing commit-graph file. With the --changed-paths option, compute and write information about the paths changed between a commit and its first parent. This operation can take a while on large repositories. It provides significant performance gains for getting history of a directory or a file with git log -- <path>. If this option is given, future commit-graph writes will automatically assume that this option was intended. Use --no-changed-paths to stop storing this data. With the --max-new-filters=<n> option, generate at most n new Bloom filters (if --changed-paths is specified). If n is -1, no limit is enforced. Only commits present in the new layer count against this limit. To retroactively compute Bloom filters over earlier layers, it is advised to use --split=replace. Overrides the commitGraph.maxNewFilters configuration. With the --split[=<strategy>] option, write the commit-graph as a chain of multiple commit-graph files stored in <dir>/info/commit-graphs. Commit-graph layers are merged based on the strategy and other splitting options. The new commits not already in the commit-graph are added in a new "tip" file. 
This file is merged with the existing file if the following merge conditions are met: If --split=no-merge is specified, a merge is never performed, and the remaining options are ignored. --split=replace overwrites the existing chain with a new one. A bare --split defers to the remaining options. (Note that merging a chain of commit graphs replaces the existing chain with a length-1 chain where the first and only incremental holds the entire graph). If --size-multiple=<X> is not specified, let X equal 2. If the new tip file would have N commits and the previous tip has M commits and X times N is greater than M, instead merge the two files into a single file. If --max-commits=<M> is specified with M a positive integer, and the new tip file would have more than M commits, then instead merge the new tip with the previous tip. Finally, if --expire-time=<datetime> is not specified, let datetime be the current time. After writing the split commit-graph, delete all unused commit-graph whose modified times are older than datetime. verify Read the commit-graph file and verify its contents against the object database. Used to check for corrupted data. With the --shallow option, only check the tip commit-graph file in a chain of split commit-graphs. EXAMPLES top Write a commit-graph file for the packed commits in your local .git directory. $ git commit-graph write Write a commit-graph file, extending the current commit-graph file using commits in <pack-index>. $ echo <pack-index> | git commit-graph write --stdin-packs Write a commit-graph file containing all reachable commits. $ git show-ref -s | git commit-graph write --stdin-commits Write a commit-graph file containing all commits in the current commit-graph file along with those reachable from HEAD. $ git rev-parse HEAD | git commit-graph write --stdin-commits --append CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as whats found there: commitGraph.generationVersion Specifies the type of generation number version to use when writing or reading the commit-graph file. If version 1 is specified, then the corrected commit dates will not be written or read. Defaults to 2. commitGraph.maxNewFilters Specifies the default value for the --max-new-filters option of git commit-graph write (c.f., git-commit-graph(1)). commitGraph.readChangedPaths If true, then git will use the changed-path Bloom filters in the commit-graph file (if it exists, and they are present). Defaults to true. See git-commit-graph(1) for more information. FILE FORMAT top see gitformat-commit-graph(5). GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) 
# git commit-graph

> Write and verify Git commit-graph files.
> More information: <https://git-scm.com/docs/git-commit-graph>.

- Write a commit-graph file for the packed commits in the repository's local `.git` directory:

`git commit-graph write`

- Write a commit-graph file containing all reachable commits:

`git show-ref --hash | git commit-graph write --stdin-commits`

- Write a commit-graph file containing all commits in the current commit-graph file along with those reachable from `HEAD`:

`git rev-parse HEAD | git commit-graph write --stdin-commits --append`
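A hedged sketch of routine commit-graph maintenance using the options documented above: $ git commit-graph write --reachable --changed-paths # full graph with changed-path Bloom filters
$ git commit-graph write --reachable --split # add newer commits as an incremental layer
$ git commit-graph verify # check the graph against the object database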
git-commit-tree
git-commit-tree(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-commit-tree(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | COMMIT INFORMATION | DATE FORMATS | DISCUSSION | FILES | SEE ALSO | GIT | COLOPHON GIT-COMMIT-TREE(1) Git Manual GIT-COMMIT-TREE(1) NAME top git-commit-tree - Create a new commit object SYNOPSIS top git commit-tree <tree> [(-p <parent>)...] git commit-tree [(-p <parent>)...] [-S[<keyid>]] [(-m <message>)...] [(-F <file>)...] <tree> DESCRIPTION top This is usually not what an end user wants to run directly. See git-commit(1) instead. Creates a new commit object based on the provided tree object and emits the new commit object id on stdout. The log message is read from the standard input, unless -m or -F options are given. The -m and -F options can be given any number of times, in any order. The commit log message will be composed in the order in which the options are given. A commit object may have any number of parents. With exactly one parent, it is an ordinary commit. Having more than one parent makes the commit a merge between several lines of history. Initial (root) commits have no parents. While a tree represents a particular directory state of a working directory, a commit represents that state in "time", and explains how to get there. Normally a commit would identify a new "HEAD" state, and while Git doesnt care where you save the note about that state, in practice we tend to just write the result to the file that is pointed at by .git/HEAD, so that we can always see what the last committed state was. OPTIONS top <tree> An existing tree object. -p <parent> Each -p indicates the id of a parent commit object. -m <message> A paragraph in the commit log message. This can be given more than once and each <message> becomes its own paragraph. -F <file> Read the commit log message from the given file. Use - to read from the standard input. This can be given more than once and the content of each file becomes its own paragraph. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand a --gpg-sign option given earlier on the command line. COMMIT INFORMATION top A commit encapsulates: all parent object ids author name, email and date committer name and email and the commit time. A commit comment is read from stdin. If a changelog entry is not provided via "<" redirection, git commit-tree will just wait for one to be entered and terminated with ^D. DATE FORMATS top The GIT_AUTHOR_DATE and GIT_COMMITTER_DATE environment variables support the following date formats: Git internal format It is <unix-timestamp> <time-zone-offset>, where <unix-timestamp> is the number of seconds since the UNIX epoch. <time-zone-offset> is a positive or negative offset from UTC. For example CET (which is 1 hour ahead of UTC) is +0100. RFC 2822 The standard email format as described by RFC 2822, for example Thu, 07 Apr 2005 22:13:13 +0200. ISO 8601 Time and date specified by the ISO 8601 standard, for example 2005-04-07T22:13:13. The parser accepts a space instead of the T character as well. Fractional parts of a second will be ignored, for example 2005-04-07T22:13:13.019 will be treated as 2005-04-07T22:13:13. Note In addition, the date part is accepted in the following formats: YYYY.MM.DD, MM/DD/YYYY and DD.MM.YYYY. 
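As a hedged illustration of how this plumbing command is usually combined with git-write-tree(1) and git-update-ref(1) (file, message and branch names are placeholders): $ git add hello.c
$ tree=$(git write-tree) # snapshot the index as a tree object
$ commit=$(echo "Add hello" | git commit-tree "$tree" -p HEAD) # message read from stdin, HEAD as parent
$ git update-ref refs/heads/main "$commit" # advance the branch to the new commit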
DISCUSSION top Git is to some extent character encoding agnostic. The contents of the blob objects are uninterpreted sequences of bytes. There is no encoding translation at the core level. Path names are encoded in UTF-8 normalization form C. This applies to tree objects, the index file, ref names, as well as path names in command line arguments, environment variables and config files (.git/config (see git-config(1)), gitignore(5), gitattributes(5) and gitmodules(5)). Note that Git at the core level treats path names simply as sequences of non-NUL bytes, there are no path name encoding conversions (except on Mac and Windows). Therefore, using non-ASCII path names will mostly work even on platforms and file systems that use legacy extended ASCII encodings. However, repositories created on such systems will not work properly on UTF-8-based systems (e.g. Linux, Mac, Windows) and vice versa. Additionally, many Git-based tools simply assume path names to be UTF-8 and will fail to display other encodings correctly. Commit log messages are typically encoded in UTF-8, but other extended ASCII encodings are also supported. This includes ISO-8859-x, CP125x and many others, but not UTF-16/32, EBCDIC and CJK multi-byte encodings (GBK, Shift-JIS, Big5, EUC-x, CP9xx etc.). Although we encourage that the commit log messages are encoded in UTF-8, both the core and Git Porcelain are designed not to force UTF-8 on projects. If all participants of a particular project find it more convenient to use legacy encodings, Git does not forbid it. However, there are a few things to keep in mind. 1. git commit and git commit-tree issue a warning if the commit log message given to it does not look like a valid UTF-8 string, unless you explicitly say your project uses a legacy encoding. The way to say this is to have i18n.commitEncoding in .git/config file, like this: [i18n] commitEncoding = ISO-8859-1 Commit objects created with the above setting record the value of i18n.commitEncoding in their encoding header. This is to help other people who look at them later. Lack of this header implies that the commit log message is encoded in UTF-8. 2. git log, git show, git blame and friends look at the encoding header of a commit object, and try to re-code the log message into UTF-8 unless otherwise specified. You can specify the desired output encoding with i18n.logOutputEncoding in .git/config file, like this: [i18n] logOutputEncoding = ISO-8859-1 If you do not have this configuration variable, the value of i18n.commitEncoding is used instead. Note that we deliberately chose not to re-code the commit log message when a commit is made to force UTF-8 at the commit object level, because re-coding to UTF-8 is not necessarily a reversible operation. FILES top /etc/mailname SEE ALSO top git-write-tree(1) git-commit(1) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) 
# git commit-tree

> Low level utility to create commit objects.
> See also: `git commit`.
> More information: <https://git-scm.com/docs/git-commit-tree>.

- Create a commit object with the specified message:

`git commit-tree {{tree}} -m "{{message}}"`

- Create a commit object reading the message from a file (use `-` for `stdin`):

`git commit-tree {{tree}} -F {{path/to/file}}`

- Create a GPG-signed commit object:

`git commit-tree {{tree}} -m "{{message}}" --gpg-sign`

- Create a commit object with the specified parent commit object:

`git commit-tree {{tree}} -m "{{message}}" -p {{parent_commit_sha}}`
git-config
git-config(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-config(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFIGURATION | FILES | SCOPES | ENVIRONMENT | EXAMPLES | CONFIGURATION FILE | BUGS | GIT | NOTES | COLOPHON GIT-CONFIG(1) Git Manual GIT-CONFIG(1) NAME top git-config - Get and set repository or global options SYNOPSIS top git config [<file-option>] [--type=<type>] [--fixed-value] [--show-origin] [--show-scope] [-z|--null] <name> [<value> [<value-pattern>]] git config [<file-option>] [--type=<type>] --add <name> <value> git config [<file-option>] [--type=<type>] [--fixed-value] --replace-all <name> <value> [<value-pattern>] git config [<file-option>] [--type=<type>] [--show-origin] [--show-scope] [-z|--null] [--fixed-value] --get <name> [<value-pattern>] git config [<file-option>] [--type=<type>] [--show-origin] [--show-scope] [-z|--null] [--fixed-value] --get-all <name> [<value-pattern>] git config [<file-option>] [--type=<type>] [--show-origin] [--show-scope] [-z|--null] [--fixed-value] [--name-only] --get-regexp <name-regex> [<value-pattern>] git config [<file-option>] [--type=<type>] [-z|--null] --get-urlmatch <name> <URL> git config [<file-option>] [--fixed-value] --unset <name> [<value-pattern>] git config [<file-option>] [--fixed-value] --unset-all <name> [<value-pattern>] git config [<file-option>] --rename-section <old-name> <new-name> git config [<file-option>] --remove-section <name> git config [<file-option>] [--show-origin] [--show-scope] [-z|--null] [--name-only] -l | --list git config [<file-option>] --get-color <name> [<default>] git config [<file-option>] --get-colorbool <name> [<stdout-is-tty>] git config [<file-option>] -e | --edit DESCRIPTION top You can query/set/replace/unset options with this command. The name is actually the section and the key separated by a dot, and the value will be escaped. Multiple lines can be added to an option by using the --add option. If you want to update or unset an option which can occur on multiple lines, a value-pattern (which is an extended regular expression, unless the --fixed-value option is given) needs to be given. Only the existing values that match the pattern are updated or unset. If you want to handle the lines that do not match the pattern, just prepend a single exclamation mark in front (see also the section called EXAMPLES), but note that this only works when the --fixed-value option is not in use. The --type=<type> option instructs git config to ensure that incoming and outgoing values are canonicalize-able under the given <type>. If no --type=<type> is given, no canonicalization will be performed. Callers may unset an existing --type specifier with --no-type. When reading, the values are read from the system, global and repository local configuration files by default, and options --system, --global, --local, --worktree and --file <filename> can be used to tell the command to read from only that location (see the section called FILES). When writing, the new value is written to the repository local configuration file by default, and options --system, --global, --worktree, --file <filename> can be used to tell the command to write to that location (you can say --local but that is the default). This command will fail with non-zero status upon error. 
Some exit codes are: The section or key is invalid (ret=1), no section or name was provided (ret=2), the config file is invalid (ret=3), the config file cannot be written (ret=4), you try to unset an option which does not exist (ret=5), you try to unset/set an option for which multiple lines match (ret=5), or you try to use an invalid regexp (ret=6). On success, the command returns the exit code 0. A list of all available configuration variables can be obtained using the git help --config command. OPTIONS top --replace-all Default behavior is to replace at most one line. This replaces all lines matching the key (and optionally the value-pattern). --add Adds a new line to the option without altering any existing values. This is the same as providing ^$ as the value-pattern in --replace-all. --get Get the value for a given key (optionally filtered by a regex matching the value). Returns error code 1 if the key was not found and the last value if multiple key values were found. --get-all Like get, but returns all values for a multi-valued key. --get-regexp Like --get-all, but interprets the name as a regular expression and writes out the key names. Regular expression matching is currently case-sensitive and done against a canonicalized version of the key in which section and variable names are lowercased, but subsection names are not. --get-urlmatch <name> <URL> When given a two-part name section.key, the value for section.<URL>.key whose <URL> part matches the best to the given URL is returned (if no such key exists, the value for section.key is used as a fallback). When given just the section as name, do so for all the keys in the section and list them. Returns error code 1 if no value is found. --global For writing options: write to global ~/.gitconfig file rather than the repository .git/config, write to $XDG_CONFIG_HOME/git/config file if this file exists and the ~/.gitconfig file doesnt. For reading options: read only from global ~/.gitconfig and from $XDG_CONFIG_HOME/git/config rather than from all available files. See also the section called FILES. --system For writing options: write to system-wide $(prefix)/etc/gitconfig rather than the repository .git/config. For reading options: read only from system-wide $(prefix)/etc/gitconfig rather than from all available files. See also the section called FILES. --local For writing options: write to the repository .git/config file. This is the default behavior. For reading options: read only from the repository .git/config rather than from all available files. See also the section called FILES. --worktree Similar to --local except that $GIT_DIR/config.worktree is read from or written to if extensions.worktreeConfig is enabled. If not its the same as --local. Note that $GIT_DIR is equal to $GIT_COMMON_DIR for the main working tree, but is of the form $GIT_DIR/worktrees/<id>/ for other working trees. See git-worktree(1) to learn how to enable extensions.worktreeConfig. -f <config-file>, --file <config-file> For writing options: write to the specified file rather than the repository .git/config. For reading options: read only from the specified file rather than from all available files. See also the section called FILES. --blob <blob> Similar to --file but use the given blob instead of a file. E.g. you can use master:.gitmodules to read values from the file .gitmodules in the master branch. See "SPECIFYING REVISIONS" section in gitrevisions(7) for a more complete list of ways to spell blob names. 
--remove-section Remove the given section from the configuration file. --rename-section Rename the given section to a new name. --unset Remove the line matching the key from config file. --unset-all Remove all lines matching the key from config file. -l, --list List all variables set in config file, along with their values. --fixed-value When used with the value-pattern argument, treat value-pattern as an exact string instead of a regular expression. This will restrict the name/value pairs that are matched to only those where the value is exactly equal to the value-pattern. --type <type> git config will ensure that any input or output is valid under the given type constraint(s), and will canonicalize outgoing values in <type>'s canonical form. Valid <type>'s include: bool: canonicalize values as either "true" or "false". int: canonicalize values as simple decimal numbers. An optional suffix of k, m, or g will cause the value to be multiplied by 1024, 1048576, or 1073741824 upon input. bool-or-int: canonicalize according to either bool or int, as described above. path: canonicalize by expanding a leading ~ to the value of $HOME and ~user to the home directory for the specified user. This specifier has no effect when setting the value (but you can use git config section.variable ~/ from the command line to let your shell do the expansion.) expiry-date: canonicalize by converting from a fixed or relative date-string to a timestamp. This specifier has no effect when setting the value. color: When getting a value, canonicalize by converting to an ANSI color escape sequence. When setting a value, a sanity-check is performed to ensure that the given value is canonicalize-able as an ANSI color, but it is written as-is. --bool, --int, --bool-or-int, --path, --expiry-date Historical options for selecting a type specifier. Prefer instead --type (see above). --no-type Un-sets the previously set type specifier (if one was previously set). This option requests that git config not canonicalize the retrieved variable. --no-type has no effect without --type=<type> or --<type>. -z, --null For all options that output values and/or keys, always end values with the null character (instead of a newline). Use newline instead as a delimiter between key and value. This allows for secure parsing of the output without getting confused e.g. by values that contain line breaks. --name-only Output only the names of config variables for --list or --get-regexp. --show-origin Augment the output of all queried config options with the origin type (file, standard input, blob, command line) and the actual origin (config file path, ref, or blob id if applicable). --show-scope Similar to --show-origin in that it augments the output of all queried config options with the scope of that value (worktree, local, global, system, command). --get-colorbool <name> [<stdout-is-tty>] Find the color setting for <name> (e.g. color.diff) and output "true" or "false". <stdout-is-tty> should be either "true" or "false", and is taken into account when configuration says "auto". If <stdout-is-tty> is missing, then checks the standard output of the command itself, and exits with status 0 if color is to be used, or exits with status 1 otherwise. When the color setting for name is undefined, the command uses color.ui as fallback. --get-color <name> [<default>] Find the color configured for name (e.g. color.diff.new) and output it as the ANSI color escape sequence to the standard output. 
The optional default parameter is used instead, if there is no color configured for name. --type=color [--default=<default>] is preferred over --get-color (but note that --get-color will omit the trailing newline printed by --type=color). -e, --edit Opens an editor to modify the specified config file; either --system, --global, or repository (default). --[no-]includes Respect include.* directives in config files when looking up values. Defaults to off when a specific file is given (e.g., using --file, --global, etc) and on when searching all config files. --default <value> When using --get, and the requested variable is not found, behave as if <value> were the value assigned to the that variable. CONFIGURATION top pager.config is only respected when listing configuration, i.e., when using --list or any of the --get-* which may return multiple results. The default is to use a pager. FILES top By default, git config will read configuration options from multiple files: $(prefix)/etc/gitconfig System-wide configuration file. $XDG_CONFIG_HOME/git/config, ~/.gitconfig User-specific configuration files. When the XDG_CONFIG_HOME environment variable is not set or empty, $HOME/.config/ is used as $XDG_CONFIG_HOME. These are also called "global" configuration files. If both files exist, both files are read in the order given above. $GIT_DIR/config Repository specific configuration file. $GIT_DIR/config.worktree This is optional and is only searched when extensions.worktreeConfig is present in $GIT_DIR/config. You may also provide additional configuration parameters when running any git command by using the -c option. See git(1) for details. Options will be read from all of these files that are available. If the global or the system-wide configuration files are missing or unreadable they will be ignored. If the repository configuration file is missing or unreadable, git config will exit with a non-zero error code. An error message is produced if the file is unreadable, but not if it is missing. The files are read in the order given above, with last value found taking precedence over values read earlier. When multiple values are taken then all values of a key from all files will be used. By default, options are only written to the repository specific configuration file. Note that this also affects options like --replace-all and --unset. git config will only ever change one file at a time. You can limit which configuration sources are read from or written to by specifying the path of a file with the --file option, or by specifying a configuration scope with --system, --global, --local, or --worktree. For more, see the section called OPTIONS above. SCOPES top Each configuration source falls within a configuration scope. The scopes are: system $(prefix)/etc/gitconfig global $XDG_CONFIG_HOME/git/config ~/.gitconfig local $GIT_DIR/config worktree $GIT_DIR/config.worktree command GIT_CONFIG_{COUNT,KEY,VALUE} environment variables (see the section called ENVIRONMENT below) the -c option With the exception of command, each scope corresponds to a command line option: --system, --global, --local, --worktree. When reading options, specifying a scope will only read options from the files within that scope. When writing options, specifying a scope will write to the files within that scope (instead of the repository specific configuration file). See the section called OPTIONS above for a complete description. 
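A short sketch tying together the scope and type options described above (the key names and values are examples only): $ git config --global user.email "you@example.com" # write to the global file
$ git config --local core.filemode false # write to .git/config (the default target)
$ git config --show-origin --show-scope --get user.email # report where the value was found
$ git config --type=bool --get core.filemode # canonicalize the value as a boolean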
Most configuration options are respected regardless of the scope it is defined in, but some options are only respected in certain scopes. See the respective options documentation for the full details. Protected configuration Protected configuration refers to the system, global, and command scopes. For security reasons, certain options are only respected when they are specified in protected configuration, and ignored otherwise. Git treats these scopes as if they are controlled by the user or a trusted administrator. This is because an attacker who controls these scopes can do substantial harm without using Git, so it is assumed that the users environment protects these scopes against attackers. ENVIRONMENT top GIT_CONFIG_GLOBAL, GIT_CONFIG_SYSTEM Take the configuration from the given files instead from global or system-level configuration. See git(1) for details. GIT_CONFIG_NOSYSTEM Whether to skip reading settings from the system-wide $(prefix)/etc/gitconfig file. See git(1) for details. See also the section called FILES. GIT_CONFIG_COUNT, GIT_CONFIG_KEY_<n>, GIT_CONFIG_VALUE_<n> If GIT_CONFIG_COUNT is set to a positive number, all environment pairs GIT_CONFIG_KEY_<n> and GIT_CONFIG_VALUE_<n> up to that number will be added to the processs runtime configuration. The config pairs are zero-indexed. Any missing key or value is treated as an error. An empty GIT_CONFIG_COUNT is treated the same as GIT_CONFIG_COUNT=0, namely no pairs are processed. These environment variables will override values in configuration files, but will be overridden by any explicit options passed via git -c. This is useful for cases where you want to spawn multiple git commands with a common configuration but cannot depend on a configuration file, for example when writing scripts. GIT_CONFIG If no --file option is provided to git config, use the file given by GIT_CONFIG as if it were provided via --file. This variable has no effect on other Git commands, and is mostly for historical compatibility; there is generally no reason to use it instead of the --file option. EXAMPLES top Given a .git/config like this: # # This is the config file, and # a '#' or ';' character indicates # a comment # ; core variables [core] ; Don't trust file modes filemode = false ; Our diff algorithm [diff] external = /usr/local/bin/diff-wrapper renames = true ; Proxy settings [core] gitproxy=proxy-command for kernel.org gitproxy=default-proxy ; for all the rest ; HTTP [http] sslVerify [http "https://weak.example.com"] sslVerify = false cookieFile = /tmp/cookie.txt you can set the filemode to true with % git config core.filemode true The hypothetical proxy command entries actually have a postfix to discern what URL they apply to. Here is how to change the entry for kernel.org to "ssh". % git config core.gitproxy '"ssh" for kernel.org' 'for kernel.org$' This makes sure that only the key/value pair for kernel.org is replaced. To delete the entry for renames, do % git config --unset diff.renames If you want to delete an entry for a multivar (like core.gitproxy above), you have to provide a regex matching the value of exactly one line. 
To query the value for a given key, do % git config --get core.filemode or % git config core.filemode or, to query a multivar: % git config --get core.gitproxy "for kernel.org$" If you want to know all the values for a multivar, do: % git config --get-all core.gitproxy If you like to live dangerously, you can replace all core.gitproxy by a new one with % git config --replace-all core.gitproxy ssh However, if you really only want to replace the line for the default proxy, i.e. the one without a "for ..." postfix, do something like this: % git config core.gitproxy ssh '! for ' To actually match only values with an exclamation mark, you have to % git config section.key value '[!]' To add a new proxy, without altering any of the existing ones, use % git config --add core.gitproxy '"proxy-command" for example.com' An example to use customized color from the configuration in your script: #!/bin/sh WS=$(git config --get-color color.diff.whitespace "blue reverse") RESET=$(git config --get-color "" "reset") echo "${WS}your whitespace color or blue reverse${RESET}" For URLs in https://weak.example.com , http.sslVerify is set to false, while it is set to true for all others: % git config --type=bool --get-urlmatch http.sslverify https://good.example.com true % git config --type=bool --get-urlmatch http.sslverify https://weak.example.com false % git config --get-urlmatch http https://weak.example.com http.cookieFile /tmp/cookie.txt http.sslverify false CONFIGURATION FILE top The Git configuration file contains a number of variables that affect the Git commands' behavior. The files .git/config and optionally config.worktree (see the "CONFIGURATION FILE" section of git-worktree(1)) in each repository are used to store the configuration for that repository, and $HOME/.gitconfig is used to store a per-user configuration as fallback values for the .git/config file. The file /etc/gitconfig can be used to store a system-wide default configuration. The configuration variables are used by both the Git plumbing and the porcelain commands. The variables are divided into sections, wherein the fully qualified variable name of the variable itself is the last dot-separated segment and the section name is everything before the last dot. The variable names are case-insensitive, allow only alphanumeric characters and -, and must start with an alphabetic character. Some variables may appear multiple times; we say then that the variable is multivalued. Syntax The syntax is fairly flexible and permissive; whitespaces are mostly ignored. The # and ; characters begin comments to the end of line, blank lines are ignored. The file consists of sections and variables. A section begins with the name of the section in square brackets and continues until the next section begins. Section names are case-insensitive. Only alphanumeric characters, - and . are allowed in section names. Each variable must belong to some section, which means that there must be a section header before the first setting of a variable. Sections can be further divided into subsections. To begin a subsection put its name in double quotes, separated by space from the section name, in the section header, like in the example below: [section "subsection"] Subsection names are case sensitive and can contain any characters except newline and the null byte. Doublequote " and backslash can be included by escaping them as \" and \\, respectively. Backslashes preceding other characters are dropped when reading; for example, \t is read as t and \0 is read as 0. 
Section headers cannot span multiple lines. Variables may belong directly to a section or to a given subsection. You can have [section] if you have [section "subsection"], but you dont need to. There is also a deprecated [section.subsection] syntax. With this syntax, the subsection name is converted to lower-case and is also compared case sensitively. These subsection names follow the same restrictions as section names. All the other lines (and the remainder of the line after the section header) are recognized as setting variables, in the form name = value (or just name, which is a short-hand to say that the variable is the boolean "true"). The variable names are case-insensitive, allow only alphanumeric characters and -, and must start with an alphabetic character. A line that defines a value can be continued to the next line by ending it with a \; the backslash and the end-of-line are stripped. Leading whitespaces after name =, the remainder of the line after the first comment character # or ;, and trailing whitespaces of the line are discarded unless they are enclosed in double quotes. Internal whitespaces within the value are retained verbatim. Inside double quotes, double quote " and backslash \ characters must be escaped: use \" for " and \\ for \. The following escape sequences (beside \" and \\) are recognized: \n for newline character (NL), \t for horizontal tabulation (HT, TAB) and \b for backspace (BS). Other char escape sequences (including octal escape sequences) are invalid. Includes The include and includeIf sections allow you to include config directives from another source. These sections behave identically to each other with the exception that includeIf sections may be ignored if their condition does not evaluate to true; see "Conditional includes" below. You can include a config file from another by setting the special include.path (or includeIf.*.path) variable to the name of the file to be included. The variable takes a pathname as its value, and is subject to tilde expansion. These variables can be given multiple times. The contents of the included file are inserted immediately, as if they had been found at the location of the include directive. If the value of the variable is a relative path, the path is considered to be relative to the configuration file in which the include directive was found. See below for examples. Conditional includes You can conditionally include a config file from another by setting an includeIf.<condition>.path variable to the name of the file to be included. The condition starts with a keyword followed by a colon and some data whose format and meaning depends on the keyword. Supported keywords are: gitdir The data that follows the keyword gitdir: is used as a glob pattern. If the location of the .git directory matches the pattern, the include condition is met. The .git location may be auto-discovered, or come from $GIT_DIR environment variable. If the repository is auto-discovered via a .git file (e.g. from submodules, or a linked worktree), the .git location would be the final location where the .git directory is, not where the .git file is. The pattern can contain standard globbing wildcards and two additional ones, **/ and /**, that can match multiple path components. Please refer to gitignore(5) for details. For convenience: If the pattern starts with ~/, ~ will be substituted with the content of the environment variable HOME. If the pattern starts with ./, it is replaced with the directory containing the current config file. 
If the pattern does not start with either ~/, ./ or /, **/ will be automatically prepended. For example, the pattern foo/bar becomes **/foo/bar and would match /any/path/to/foo/bar. If the pattern ends with /, ** will be automatically added. For example, the pattern foo/ becomes foo/**. In other words, it matches "foo" and everything inside, recursively. gitdir/i This is the same as gitdir except that matching is done case-insensitively (e.g. on case-insensitive file systems) onbranch The data that follows the keyword onbranch: is taken to be a pattern with standard globbing wildcards and two additional ones, **/ and /**, that can match multiple path components. If we are in a worktree where the name of the branch that is currently checked out matches the pattern, the include condition is met. If the pattern ends with /, ** will be automatically added. For example, the pattern foo/ becomes foo/**. In other words, it matches all branches that begin with foo/. This is useful if your branches are organized hierarchically and you would like to apply a configuration to all the branches in that hierarchy. hasconfig:remote.*.url: The data that follows this keyword is taken to be a pattern with standard globbing wildcards and two additional ones, **/ and /**, that can match multiple components. The first time this keyword is seen, the rest of the config files will be scanned for remote URLs (without applying any values). If there exists at least one remote URL that matches this pattern, the include condition is met. Files included by this option (directly or indirectly) are not allowed to contain remote URLs. Note that unlike other includeIf conditions, resolving this condition relies on information that is not yet known at the point of reading the condition. A typical use case is this option being present as a system-level or global-level config, and the remote URL being in a local-level config; hence the need to scan ahead when resolving this condition. In order to avoid the chicken-and-egg problem in which potentially-included files can affect whether such files are potentially included, Git breaks the cycle by prohibiting these files from affecting the resolution of these conditions (thus, prohibiting them from declaring remote URLs). As for the naming of this keyword, it is for forwards compatibility with a naming scheme that supports more variable-based include conditions, but currently Git only supports the exact keyword described above. A few more notes on matching via gitdir and gitdir/i: Symlinks in $GIT_DIR are not resolved before matching. Both the symlink & realpath versions of paths will be matched outside of $GIT_DIR. E.g. if ~/git is a symlink to /mnt/storage/git, both gitdir:~/git and gitdir:/mnt/storage/git will match. This was not the case in the initial release of this feature in v2.13.0, which only matched the realpath version. Configuration that wants to be compatible with the initial release of this feature needs to either specify only the realpath version, or both versions. Note that "../" is not special and will match literally, which is unlikely what you want. 
Example

        # Core variables
        [core]
                ; Don't trust file modes
                filemode = false

        # Our diff algorithm
        [diff]
                external = /usr/local/bin/diff-wrapper
                renames = true

        [branch "devel"]
                remote = origin
                merge = refs/heads/devel

        # Proxy settings
        [core]
                gitProxy="ssh" for "kernel.org"
                gitProxy=default-proxy ; for the rest

        [include]
                path = /path/to/foo.inc ; include by absolute path
                path = foo.inc ; find "foo.inc" relative to the current file
                path = ~/foo.inc ; find "foo.inc" in your `$HOME` directory

        ; include if $GIT_DIR is /path/to/foo/.git
        [includeIf "gitdir:/path/to/foo/.git"]
                path = /path/to/foo.inc

        ; include for all repositories inside /path/to/group
        [includeIf "gitdir:/path/to/group/"]
                path = /path/to/foo.inc

        ; include for all repositories inside $HOME/to/group
        [includeIf "gitdir:~/to/group/"]
                path = /path/to/foo.inc

        ; relative paths are always relative to the including
        ; file (if the condition is true); their location is not
        ; affected by the condition
        [includeIf "gitdir:/path/to/group/"]
                path = foo.inc

        ; include only if we are in a worktree where foo-branch is
        ; currently checked out
        [includeIf "onbranch:foo-branch"]
                path = foo.inc

        ; include only if a remote with the given URL exists (note
        ; that such a URL may be provided later in a file or in a
        ; file read after this file is read, as seen in this example)
        [includeIf "hasconfig:remote.*.url:https://example.com/**"]
                path = foo.inc
        [remote "origin"]
                url = https://example.com/git

Values Values of many variables are treated as a simple string, but there are variables that take values of specific types and there are rules as to how to spell them. boolean When a variable is said to take a boolean value, many synonyms are accepted for true and false; these are all case-insensitive. true Boolean true literals are yes, on, true, and 1. Also, a variable defined without = <value> is taken as true. false Boolean false literals are no, off, false, 0 and the empty string. When converting a value to its canonical form using the --type=bool type specifier, git config will ensure that the output is "true" or "false" (spelled in lowercase). integer The value for many variables that specify various sizes can be suffixed with k, M,... to mean "scale the number by 1024", "by 1024x1024", etc. color The value for a variable that takes a color is a list of colors (at most two, one for foreground and one for background) and attributes (as many as you want), separated by spaces. The basic colors accepted are normal, black, red, green, yellow, blue, magenta, cyan, white and default. The first color given is the foreground; the second is the background. All the basic colors except normal and default have a bright variant that can be specified by prefixing the color with bright, like brightred. The color normal makes no change to the color. It is the same as an empty string, but can be used as the foreground color when specifying a background color alone (for example, "normal red"). The color default explicitly resets the color to the terminal default, for example to specify a cleared background. Although it varies between terminals, this is usually not the same as setting to "white black". Colors may also be given as numbers between 0 and 255; these use ANSI 256-color mode (but note that not all terminals may support this). If your terminal supports it, you may also specify 24-bit RGB values as hex, like #ff0ab3. The accepted attributes are bold, dim, ul, blink, reverse, italic, and strike (for crossed-out or "strikethrough" letters).
The position of any attributes with respect to the colors (before, after, or in between) doesn't matter. Specific attributes may be turned off by prefixing them with no or no- (e.g., noreverse, no-ul, etc). The pseudo-attribute reset resets all colors and attributes before applying the specified coloring. For example, reset green will result in a green foreground and default background without any active attributes. An empty color string produces no color effect at all. This can be used to avoid coloring specific elements without disabling color entirely. For Git's pre-defined color slots, the attributes are meant to be reset at the beginning of each item in the colored output. So setting color.decorate.branch to black will paint that branch name in a plain black, even if the previous thing on the same output line (e.g. opening parenthesis before the list of branch names in log --decorate output) is set to be painted with bold or some other attribute. However, custom log formats may do more complicated and layered coloring, and the negated forms may be useful there. pathname A variable that takes a pathname value can be given a string that begins with "~/" or "~user/", and the usual tilde expansion happens to such a string: ~/ is expanded to the value of $HOME, and ~user/ to the specified user's home directory. If a path starts with %(prefix)/, the remainder is interpreted as a path relative to Git's "runtime prefix", i.e. relative to the location where Git itself was installed. For example, %(prefix)/bin/ refers to the directory in which the Git executable itself lives. If Git was compiled without runtime prefix support, the compiled-in prefix will be substituted instead. In the unlikely event that a literal path needs to be specified that should not be expanded, it needs to be prefixed by ./, like so: ./%(prefix)/bin. Variables Note that this list is non-comprehensive and not necessarily complete. For command-specific variables, you will find a more detailed description in the appropriate manual page. Other git-related tools may and do use their own variables. When inventing new variables for use in your own tool, make sure their names do not conflict with those that are used by Git itself and other popular tools, and describe them in your documentation. advice.* These variables control various optional help messages designed to aid new users. All advice.* variables default to true, and you can tell Git that you do not need help by setting these to false: ambiguousFetchRefspec Advice shown when a fetch refspec for multiple remotes maps to the same remote-tracking branch namespace and causes branch tracking set-up to fail. fetchShowForcedUpdates Advice shown when git-fetch(1) takes a long time to calculate forced updates after ref updates, or to warn that the check is disabled. pushUpdateRejected Set this variable to false if you want to disable pushNonFFCurrent, pushNonFFMatching, pushAlreadyExists, pushFetchFirst, pushNeedsForce, and pushRefNeedsUpdate simultaneously. pushNonFFCurrent Advice shown when git-push(1) fails due to a non-fast-forward update to the current branch. pushNonFFMatching Advice shown when you ran git-push(1) and pushed matching refs explicitly (i.e. you used :, or specified a refspec that isn't your current branch) and it resulted in a non-fast-forward error. pushAlreadyExists Shown when git-push(1) rejects an update that does not qualify for fast-forwarding (e.g., a tag).
pushFetchFirst Shown when git-push(1) rejects an update that tries to overwrite a remote ref that points at an object we do not have. pushNeedsForce Shown when git-push(1) rejects an update that tries to overwrite a remote ref that points at an object that is not a commit-ish, or make the remote ref point at an object that is not a commit-ish. pushUnqualifiedRefname Shown when git-push(1) gives up trying to guess based on the source and destination refs what remote ref namespace the source belongs in, but where we can still suggest that the user push to either refs/heads/* or refs/tags/* based on the type of the source object. pushRefNeedsUpdate Shown when git-push(1) rejects a forced update of a branch when its remote-tracking ref has updates that we do not have locally. skippedCherryPicks Shown when git-rebase(1) skips a commit that has already been cherry-picked onto the upstream branch. statusAheadBehind Shown when git-status(1) computes the ahead/behind counts for a local ref compared to its remote tracking ref, and that calculation takes longer than expected. Will not appear if status.aheadBehind is false or the option --no-ahead-behind is given. statusHints Show directions on how to proceed from the current state in the output of git-status(1), in the template shown when writing commit messages in git-commit(1), and in the help message shown by git-switch(1) or git-checkout(1) when switching branches. statusUoption Advise to consider using the -u option to git-status(1) when the command takes more than 2 seconds to enumerate untracked files. commitBeforeMerge Advice shown when git-merge(1) refuses to merge to avoid overwriting local changes. resetNoRefresh Advice to consider using the --no-refresh option to git-reset(1) when the command takes more than 2 seconds to refresh the index after reset. resolveConflict Advice shown by various commands when conflicts prevent the operation from being performed. sequencerInUse Advice shown when a sequencer command is already in progress. implicitIdentity Advice on how to set your identity configuration when your information is guessed from the system username and domain name. detachedHead Advice shown when you used git-switch(1) or git-checkout(1) to move to the detached HEAD state, to instruct how to create a local branch after the fact. suggestDetachingHead Advice shown when git-switch(1) refuses to detach HEAD without the explicit --detach option. checkoutAmbiguousRemoteBranchName Advice shown when the argument to git-checkout(1) and git-switch(1) ambiguously resolves to a remote tracking branch on more than one remote in situations where an unambiguous argument would have otherwise caused a remote-tracking branch to be checked out. See the checkout.defaultRemote configuration variable for how to set a given remote to be used by default in some situations where this advice would be printed. amWorkDir Advice that shows the location of the patch file when git-am(1) fails to apply it. rmHints In case of failure in the output of git-rm(1), show directions on how to proceed from the current state. addEmbeddedRepo Advice on what to do when youve accidentally added one git repo inside of another. ignoredHook Advice shown if a hook is ignored because the hook is not set as executable. waitingForEditor Print a message to the terminal whenever Git is waiting for editor input from the user. nestedTag Advice shown if a user attempts to recursively tag a tag object. 
submoduleAlternateErrorStrategyDie Advice shown when a submodule.alternateErrorStrategy option configured to "die" causes a fatal error. submodulesNotUpdated Advice shown when a user runs a submodule command that fails because git submodule update --init was not run. addIgnoredFile Advice shown if a user attempts to add an ignored file to the index. addEmptyPathspec Advice shown if a user runs the add command without providing the pathspec parameter. updateSparsePath Advice shown when either git-add(1) or git-rm(1) is asked to update index entries outside the current sparse checkout. diverging Advice shown when a fast-forward is not possible. worktreeAddOrphan Advice shown when a user tries to create a worktree from an invalid reference, to instruct how to create a new orphan branch instead. attr.tree A reference to a tree in the repository from which to read attributes, instead of the .gitattributes file in the working tree. In a bare repository, this defaults to HEAD:.gitattributes. If the value does not resolve to a valid tree object, an empty tree is used instead. When the GIT_ATTR_SOURCE environment variable or --attr-source command line option are used, this configuration variable has no effect. core.fileMode Tells Git if the executable bit of files in the working tree is to be honored. Some filesystems lose the executable bit when a file that is marked as executable is checked out, or checks out a non-executable file with executable bit on. git-clone(1) or git-init(1) probe the filesystem to see if it handles the executable bit correctly and this variable is automatically set as necessary. A repository, however, may be on a filesystem that handles the filemode correctly, and this variable is set to true when created, but later may be made accessible from another environment that loses the filemode (e.g. exporting ext4 via CIFS mount, visiting a Cygwin created repository with Git for Windows or Eclipse). In such a case it may be necessary to set this variable to false. See git-update-index(1). The default is true (when core.filemode is not specified in the config file). core.hideDotFiles (Windows-only) If true, mark newly-created directories and files whose name starts with a dot as hidden. If dotGitOnly, only the .git/ directory is hidden, but no other files starting with a dot. The default mode is dotGitOnly. core.ignoreCase Internal variable which enables various workarounds to enable Git to work better on filesystems that are not case sensitive, like APFS, HFS+, FAT, NTFS, etc. For example, if a directory listing finds "makefile" when Git expects "Makefile", Git will assume it is really the same file, and continue to remember it as "Makefile". The default is false, except git-clone(1) or git-init(1) will probe and set core.ignoreCase true if appropriate when the repository is created. Git relies on the proper configuration of this variable for your operating and file system. Modifying this value may result in unexpected behavior. core.precomposeUnicode This option is only used by Mac OS implementation of Git. When core.precomposeUnicode=true, Git reverts the unicode decomposition of filenames done by Mac OS. This is useful when sharing a repository between Mac OS and Linux or Windows. (Git for Windows 1.7.10 or higher is needed, or Git under cygwin 1.7). When false, file names are handled fully transparent by Git, which is backward compatible with older versions of Git. 
core.protectHFS If set to true, do not allow checkout of paths that would be considered equivalent to .git on an HFS+ filesystem. Defaults to true on Mac OS, and false elsewhere. core.protectNTFS If set to true, do not allow checkout of paths that would cause problems with the NTFS filesystem, e.g. conflict with 8.3 "short" names. Defaults to true on Windows, and false elsewhere. core.fsmonitor If set to true, enable the built-in file system monitor daemon for this working directory (git-fsmonitor--daemon(1)). Like hook-based file system monitors, the built-in file system monitor can speed up Git commands that need to refresh the Git index (e.g. git status) in a working directory with many files. The built-in monitor eliminates the need to install and maintain an external third-party tool. The built-in file system monitor is currently available only on a limited set of supported platforms. Currently, this includes Windows and MacOS. Otherwise, this variable contains the pathname of the "fsmonitor" hook command. This hook command is used to identify all files that may have changed since the requested date/time. This information is used to speed up git by avoiding unnecessary scanning of files that have not changed. See the "fsmonitor-watchman" section of githooks(5). Note that if you concurrently use multiple versions of Git, such as one version on the command line and another version in an IDE tool, that the definition of core.fsmonitor was extended to allow boolean values in addition to hook pathnames. Git versions 2.35.1 and prior will not understand the boolean values and will consider the "true" or "false" values as hook pathnames to be invoked. Git versions 2.26 thru 2.35.1 default to hook protocol V2 and will fall back to no fsmonitor (full scan). Git versions prior to 2.26 default to hook protocol V1 and will silently assume there were no changes to report (no scan), so status commands may report incomplete results. For this reason, it is best to upgrade all of your Git versions before using the built-in file system monitor. core.fsmonitorHookVersion Sets the protocol version to be used when invoking the "fsmonitor" hook. There are currently versions 1 and 2. When this is not set, version 2 will be tried first and if it fails then version 1 will be tried. Version 1 uses a timestamp as input to determine which files have changes since that time but some monitors like Watchman have race conditions when used with a timestamp. Version 2 uses an opaque string so that the monitor can return something that can be used to determine what files have changed without race conditions. core.trustctime If false, the ctime differences between the index and the working tree are ignored; useful when the inode change time is regularly modified by something outside Git (file system crawlers and some backup systems). See git-update-index(1). True by default. core.splitIndex If true, the split-index feature of the index will be used. See git-update-index(1). False by default. core.untrackedCache Determines what to do about the untracked cache feature of the index. It will be kept, if this variable is unset or set to keep. It will automatically be added if set to true. And it will automatically be removed, if set to false. Before setting it to true, you should check that mtime is working properly on your system. See git-update-index(1). keep by default, unless feature.manyFiles is enabled which sets this setting to true by default. 
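For example, one way to check mtime behavior and then enable the untracked cache is the following sketch (not a recommendation for every setup):

        % git update-index --test-untracked-cache
        % git config core.untrackedCache true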
core.checkStat When missing or is set to default, many fields in the stat structure are checked to detect if a file has been modified since Git looked at it. When this configuration variable is set to minimal, sub-second part of mtime and ctime, the uid and gid of the owner of the file, the inode number (and the device number, if Git was compiled to use it), are excluded from the check among these fields, leaving only the whole-second part of mtime (and ctime, if core.trustCtime is set) and the filesize to be checked. There are implementations of Git that do not leave usable values in some fields (e.g. JGit); by excluding these fields from the comparison, the minimal mode may help interoperability when the same repository is used by these other systems at the same time. core.quotePath Commands that output paths (e.g. ls-files, diff), will quote "unusual" characters in the pathname by enclosing the pathname in double-quotes and escaping those characters with backslashes in the same way C escapes control characters (e.g. \t for TAB, \n for LF, \\ for backslash) or bytes with values larger than 0x80 (e.g. octal \302\265 for "micro" in UTF-8). If this variable is set to false, bytes higher than 0x80 are not considered "unusual" any more. Double-quotes, backslash and control characters are always escaped regardless of the setting of this variable. A simple space character is not considered "unusual". Many commands can output pathnames completely verbatim using the -z option. The default value is true. core.eol Sets the line ending type to use in the working directory for files that are marked as text (either by having the text attribute set, or by having text=auto and Git auto-detecting the contents as text). Alternatives are lf, crlf and native, which uses the platforms native line ending. The default value is native. See gitattributes(5) for more information on end-of-line conversion. Note that this value is ignored if core.autocrlf is set to true or input. core.safecrlf If true, makes Git check if converting CRLF is reversible when end-of-line conversion is active. Git will verify if a command modifies a file in the work tree either directly or indirectly. For example, committing a file followed by checking out the same file should yield the original file in the work tree. If this is not the case for the current setting of core.autocrlf, Git will reject the file. The variable can be set to "warn", in which case Git will only warn about an irreversible conversion but continue the operation. CRLF conversion bears a slight chance of corrupting data. When it is enabled, Git will convert CRLF to LF during commit and LF to CRLF during checkout. A file that contains a mixture of LF and CRLF before the commit cannot be recreated by Git. For text files this is the right thing to do: it corrects line endings such that we have only LF line endings in the repository. But for binary files that are accidentally classified as text the conversion can corrupt data. If you recognize such corruption early you can easily fix it by setting the conversion type explicitly in .gitattributes. Right after committing you still have the original file in your work tree and this file is not yet corrupted. You can explicitly tell Git that this file is binary and Git will handle the file appropriately. Unfortunately, the desired effect of cleaning up text files with mixed line endings and the undesired effect of corrupting binary files cannot be distinguished. In both cases CRLFs are removed in an irreversible way. 
For text files this is the right thing to do because CRLFs are line endings, while for binary files converting CRLFs corrupts data. Note, this safety check does not mean that a checkout will generate a file identical to the original file for a different setting of core.eol and core.autocrlf, but only for the current one. For example, a text file with LF would be accepted with core.eol=lf and could later be checked out with core.eol=crlf, in which case the resulting file would contain CRLF, although the original file contained LF. However, in both work trees the line endings would be consistent, that is either all LF or all CRLF, but never mixed. A file with mixed line endings would be reported by the core.safecrlf mechanism. core.autocrlf Setting this variable to "true" is the same as setting the text attribute to "auto" on all files and core.eol to "crlf". Set to true if you want to have CRLF line endings in your working directory and the repository has LF line endings. This variable can be set to input, in which case no output conversion is performed. core.checkRoundtripEncoding A comma and/or whitespace separated list of encodings that Git performs UTF-8 round trip checks on if they are used in an working-tree-encoding attribute (see gitattributes(5)). The default value is SHIFT-JIS. core.symlinks If false, symbolic links are checked out as small plain files that contain the link text. git-update-index(1) and git-add(1) will not change the recorded type to regular file. Useful on filesystems like FAT that do not support symbolic links. The default is true, except git-clone(1) or git-init(1) will probe and set core.symlinks false if appropriate when the repository is created. core.gitProxy A "proxy command" to execute (as command host port) instead of establishing direct connection to the remote server when using the Git protocol for fetching. If the variable value is in the "COMMAND for DOMAIN" format, the command is applied only on hostnames ending with the specified domain string. This variable may be set multiple times and is matched in the given order; the first match wins. Can be overridden by the GIT_PROXY_COMMAND environment variable (which always applies universally, without the special "for" handling). The special string none can be used as the proxy command to specify that no proxy be used for a given domain pattern. This is useful for excluding servers inside a firewall from proxy use, while defaulting to a common proxy for external domains. core.sshCommand If this variable is set, git fetch and git push will use the specified command instead of ssh when they need to connect to a remote system. The command is in the same form as the GIT_SSH_COMMAND environment variable and is overridden when the environment variable is set. core.ignoreStat If true, Git will avoid using lstat() calls to detect if files have changed by setting the "assume-unchanged" bit for those tracked files which it has updated identically in both the index and working tree. When files are modified outside of Git, the user will need to stage the modified files explicitly (e.g. see Examples section in git-update-index(1)). Git will not normally detect changes to those files. This is useful on systems where lstat() calls are very slow, such as CIFS/Microsoft Windows. False by default. core.preferSymlinkRefs Instead of the default "symref" format for HEAD and other symbolic reference files, use symbolic links. This is sometimes needed to work with old scripts that expect HEAD to be a symbolic link. 
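As an illustration of core.sshCommand above, a single repository can be pinned to a dedicated SSH identity with something like the following sketch (the key path is hypothetical):

        % git config core.sshCommand "ssh -i ~/.ssh/example_deploy_key -o IdentitiesOnly=yes"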
core.alternateRefsCommand When advertising tips of available history from an alternate, use the shell to execute the specified command instead of git-for-each-ref(1). The first argument is the absolute path of the alternate. Output must contain one hex object id per line (i.e., the same as produced by git for-each-ref --format='%(objectname)'). Note that you cannot generally put git for-each-ref directly into the config value, as it does not take a repository path as an argument (but you can wrap the command above in a shell script). core.alternateRefsPrefixes When listing references from an alternate, list only references that begin with the given prefix. Prefixes match as if they were given as arguments to git-for-each-ref(1). To list multiple prefixes, separate them with whitespace. If core.alternateRefsCommand is set, setting core.alternateRefsPrefixes has no effect. core.bare If true this repository is assumed to be bare and has no working directory associated with it. If this is the case a number of commands that require a working directory will be disabled, such as git-add(1) or git-merge(1). This setting is automatically guessed by git-clone(1) or git-init(1) when the repository was created. By default a repository that ends in "/.git" is assumed to be not bare (bare = false), while all other repositories are assumed to be bare (bare = true). core.worktree Set the path to the root of the working tree. If GIT_COMMON_DIR environment variable is set, core.worktree is ignored and not used for determining the root of working tree. This can be overridden by the GIT_WORK_TREE environment variable and the --work-tree command-line option. The value can be an absolute path or relative to the path to the .git directory, which is either specified by --git-dir or GIT_DIR, or automatically discovered. If --git-dir or GIT_DIR is specified but none of --work-tree, GIT_WORK_TREE and core.worktree is specified, the current working directory is regarded as the top level of your working tree. Note that this variable is honored even when set in a configuration file in a ".git" subdirectory of a directory and its value differs from the latter directory (e.g. "/path/to/.git/config" has core.worktree set to "/different/path"), which is most likely a misconfiguration. Running Git commands in the "/path/to" directory will still use "/different/path" as the root of the work tree and can cause confusion unless you know what you are doing (e.g. you are creating a read-only snapshot of the same index to a location different from the repositorys usual working tree). core.logAllRefUpdates Enable the reflog. Updates to a ref <ref> is logged to the file "$GIT_DIR/logs/<ref>", by appending the new and old SHA-1, the date/time and the reason of the update, but only when the file exists. If this configuration variable is set to true, missing "$GIT_DIR/logs/<ref>" file is automatically created for branch heads (i.e. under refs/heads/), remote refs (i.e. under refs/remotes/), note refs (i.e. under refs/notes/), and the symbolic ref HEAD. If it is set to always, then a missing reflog is automatically created for any ref under refs/. This information can be used to determine what commit was the tip of a branch "2 days ago". This value is true by default in a repository that has a working directory associated with it, and false by default in a bare repository. core.repositoryFormatVersion Internal variable identifying the repository format and layout version. 
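A minimal sketch of the wrapper script mentioned under core.alternateRefsCommand above (the script path and name are arbitrary):

        #!/bin/sh
        # $1 is the absolute path of the alternate; print one object id per
        # line, here limited to branch tips.
        git --git-dir="$1" for-each-ref --format='%(objectname)' refs/heads/

It could then be enabled with:

        % git config core.alternateRefsCommand /path/to/list-alternate-tips.sh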
core.sharedRepository When group (or true), the repository is made shareable between several users in a group (making sure all the files and objects are group-writable). When all (or world or everybody), the repository will be readable by all users, in addition to being group-shareable. When umask (or false), Git will use permissions reported by umask(2). When 0xxx, where 0xxx is an octal number, files in the repository will have this mode value. 0xxx will override the user's umask value (whereas the other options will only override requested parts of the user's umask value). Examples: 0660 will make the repo read/write-able for the owner and group, but inaccessible to others (equivalent to group unless umask is e.g. 0022). 0640 is a repository that is group-readable but not group-writable. See git-init(1). False by default. core.warnAmbiguousRefs If true, Git will warn you if the ref name you passed it is ambiguous and might match multiple refs in the repository. True by default. core.compression An integer -1..9, indicating a default compression level. -1 is the zlib default. 0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest. If set, this provides a default to other compression variables, such as core.looseCompression and pack.compression. core.looseCompression An integer -1..9, indicating the compression level for objects that are not in a pack file. -1 is the zlib default. 0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest. If not set, defaults to core.compression. If that is not set, defaults to 1 (best speed). core.packedGitWindowSize Number of bytes of a pack file to map into memory in a single mapping operation. Larger window sizes may allow your system to process a smaller number of large pack files more quickly. Smaller window sizes will negatively affect performance due to increased calls to the operating system's memory manager, but may improve performance when accessing a large number of large pack files. Default is 1 MiB if NO_MMAP was set at compile time, otherwise 32 MiB on 32 bit platforms and 1 GiB on 64 bit platforms. This should be reasonable for all users/operating systems. You probably do not need to adjust this value. Common unit suffixes of k, m, or g are supported. core.packedGitLimit Maximum number of bytes to map simultaneously into memory from pack files. If Git needs to access more than this many bytes at once to complete an operation it will unmap existing regions to reclaim virtual address space within the process. Default is 256 MiB on 32 bit platforms and 32 TiB (effectively unlimited) on 64 bit platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value. Common unit suffixes of k, m, or g are supported. core.deltaBaseCacheLimit Maximum number of bytes per thread to reserve for caching base objects that may be referenced by multiple deltified objects. By storing the entire decompressed base objects in a cache Git is able to avoid unpacking and decompressing frequently used base objects multiple times. Default is 96 MiB on all platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value. Common unit suffixes of k, m, or g are supported. core.bigFileThreshold The size of files considered "big", which as discussed below changes the behavior of numerous git commands, as well as how such files are stored within the repository.
The default is 512 MiB. Common unit suffixes of k, m, or g are supported. Files above the configured limit will be: Stored deflated in packfiles, without attempting delta compression. The default limit is primarily set with this use-case in mind. With it, most projects will have their source code and other text files delta compressed, but not larger binary media files. Storing large files without delta compression avoids excessive memory usage, at the slight expense of increased disk usage. Will be treated as if they were labeled "binary" (see gitattributes(5)). e.g. git-log(1) and git-diff(1) will not compute diffs for files above this limit. Will generally be streamed when written, which avoids excessive memory usage, at the cost of some fixed overhead. Commands that make use of this include git-archive(1), git-fast-import(1), git-index-pack(1), git-unpack-objects(1) and git-fsck(1). core.excludesFile Specifies the pathname to the file that contains patterns to describe paths that are not meant to be tracked, in addition to .gitignore (per-directory) and .git/info/exclude. Defaults to $XDG_CONFIG_HOME/git/ignore. If $XDG_CONFIG_HOME is either not set or empty, $HOME/.config/git/ignore is used instead. See gitignore(5). core.askPass Some commands (e.g. svn and http interfaces) that interactively ask for a password can be told to use an external program given via the value of this variable. Can be overridden by the GIT_ASKPASS environment variable. If not set, fall back to the value of the SSH_ASKPASS environment variable or, failing that, a simple password prompt. The external program shall be given a suitable prompt as command-line argument and write the password on its STDOUT. core.attributesFile In addition to .gitattributes (per-directory) and .git/info/attributes, Git looks into this file for attributes (see gitattributes(5)). Path expansions are made the same way as for core.excludesFile. Its default value is $XDG_CONFIG_HOME/git/attributes. If $XDG_CONFIG_HOME is either not set or empty, $HOME/.config/git/attributes is used instead. core.hooksPath By default Git will look for your hooks in the $GIT_DIR/hooks directory. Set this to a different path, e.g. /etc/git/hooks, and Git will try to find your hooks in that directory, e.g. /etc/git/hooks/pre-receive instead of in $GIT_DIR/hooks/pre-receive. The path can be either absolute or relative. A relative path is taken as relative to the directory where the hooks are run (see the "DESCRIPTION" section of githooks(5)). This configuration variable is useful in cases where you'd like to centrally configure your Git hooks instead of configuring them on a per-repository basis, or as a more flexible and centralized alternative to having an init.templateDir where you've changed default hooks. core.editor Commands such as commit and tag that let you edit messages by launching an editor use the value of this variable when it is set, and the environment variable GIT_EDITOR is not set. See git-var(1). core.commentChar Commands such as commit and tag that let you edit messages consider a line that begins with this character commented, and remove it after the editor returns (default #). If set to "auto", git-commit would select a character that is not the beginning character of any line in existing commit messages. core.filesRefLockTimeout The length of time, in milliseconds, to retry when trying to lock an individual reference. Value 0 means not to retry at all; -1 means to try indefinitely. Default is 100 (i.e., retry for 100ms).
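For example, hooks can be centralized as described under core.hooksPath above with a single setting (a sketch; the directory is only illustrative):

        % git config --global core.hooksPath /etc/git/hooks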
core.packedRefsTimeout The length of time, in milliseconds, to retry when trying to lock the packed-refs file. Value 0 means not to retry at all; -1 means to try indefinitely. Default is 1000 (i.e., retry for 1 second). core.pager Text viewer for use by Git commands (e.g., less). The value is meant to be interpreted by the shell. The order of preference is the $GIT_PAGER environment variable, then core.pager configuration, then $PAGER, and then the default chosen at compile time (usually less). When the LESS environment variable is unset, Git sets it to FRX (if LESS environment variable is set, Git does not change it at all). If you want to selectively override Gits default setting for LESS, you can set core.pager to e.g. less -S. This will be passed to the shell by Git, which will translate the final command to LESS=FRX less -S. The environment does not set the S option but the command line does, instructing less to truncate long lines. Similarly, setting core.pager to less -+F will deactivate the F option specified by the environment from the command-line, deactivating the "quit if one screen" behavior of less. One can specifically activate some flags for particular commands: for example, setting pager.blame to less -S enables line truncation only for git blame. Likewise, when the LV environment variable is unset, Git sets it to -c. You can override this setting by exporting LV with another value or setting core.pager to lv +c. core.whitespace A comma separated list of common whitespace problems to notice. git diff will use color.diff.whitespace to highlight them, and git apply --whitespace=error will consider them as errors. You can prefix - to disable any of them (e.g. -trailing-space): blank-at-eol treats trailing whitespaces at the end of the line as an error (enabled by default). space-before-tab treats a space character that appears immediately before a tab character in the initial indent part of the line as an error (enabled by default). indent-with-non-tab treats a line that is indented with space characters instead of the equivalent tabs as an error (not enabled by default). tab-in-indent treats a tab character in the initial indent part of the line as an error (not enabled by default). blank-at-eof treats blank lines added at the end of file as an error (enabled by default). trailing-space is a short-hand to cover both blank-at-eol and blank-at-eof. cr-at-eol treats a carriage-return at the end of line as part of the line terminator, i.e. with it, trailing-space does not trigger if the character before such a carriage-return is not a whitespace (not enabled by default). tabwidth=<n> tells how many character positions a tab occupies; this is relevant for indent-with-non-tab and when Git fixes tab-in-indent errors. The default tab width is 8. Allowed values are 1 to 63. core.fsync A comma-separated list of components of the repository that should be hardened via the core.fsyncMethod when created or modified. You can disable hardening of any component by prefixing it with a -. Items that are not hardened may be lost in the event of an unclean system shutdown. Unless you have special requirements, it is recommended that you leave this option empty or pick one of committed, added, or all. When this configuration is encountered, the set of components starts with the platform default value, disabled components are removed, and additional components are added. none resets the state so that the platform default is ignored. 
The empty string resets the fsync configuration to the platform default. The default on most platforms is equivalent to core.fsync=committed,-loose-object, which has good performance, but risks losing recent work in the event of an unclean system shutdown. none clears the set of fsynced components. loose-object hardens objects added to the repo in loose-object form. pack hardens objects added to the repo in packfile form. pack-metadata hardens packfile bitmaps and indexes. commit-graph hardens the commit-graph file. index hardens the index when it is modified. objects is an aggregate option that is equivalent to loose-object,pack. reference hardens references modified in the repo. derived-metadata is an aggregate option that is equivalent to pack-metadata,commit-graph. committed is an aggregate option that is currently equivalent to objects. This mode sacrifices some performance to ensure that work that is committed to the repository with git commit or similar commands is hardened. added is an aggregate option that is currently equivalent to committed,index. This mode sacrifices additional performance to ensure that the results of commands like git add and similar operations are hardened. all is an aggregate option that syncs all individual components above. core.fsyncMethod A value indicating the strategy Git will use to harden repository data using fsync and related primitives. fsync uses the fsync() system call or platform equivalents. writeout-only issues pagecache writeback requests, but depending on the filesystem and storage hardware, data added to the repository may not be durable in the event of a system crash. This is the default mode on macOS. batch enables a mode that uses writeout-only flushes to stage multiple updates in the disk writeback cache and then does a single full fsync of a dummy file to trigger the disk cache flush at the end of the operation. Currently batch mode only applies to loose-object files. Other repository data is made durable as if fsync was specified. This mode is expected to be as safe as fsync on macOS for repos stored on HFS+ or APFS filesystems and on Windows for repos stored on NTFS or ReFS filesystems. core.fsyncObjectFiles This boolean will enable fsync() when writing object files. This setting is deprecated. Use core.fsync instead. This setting affects data added to the Git repository in loose-object form. When set to true, Git will issue an fsync or similar system call to flush caches so that loose-objects remain consistent in the face of a unclean system shutdown. core.preloadIndex Enable parallel index preload for operations like git diff This can speed up operations like git diff and git status especially on filesystems like NFS that have weak caching semantics and thus relatively high IO latencies. When enabled, Git will do the index comparison to the filesystem data in parallel, allowing overlapping IOs. Defaults to true. core.unsetenvvars Windows-only: comma-separated list of environment variables' names that need to be unset before spawning any other process. Defaults to PERL5LIB to account for the fact that Git for Windows insists on using its own Perl interpreter. core.restrictinheritedhandles Windows-only: override whether spawned processes inherit only standard file handles (stdin, stdout and stderr) or all handles. Can be auto, true or false. Defaults to auto, which means true on Windows 7 and later, and false on older Windows versions. 
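Tying core.fsync and core.fsyncMethod above together, a cautious setup might look like the following sketch (the values are illustrative, not a recommendation):

        % git config core.fsync committed
        % git config core.fsyncMethod batch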
core.createObject You can set this to link, in which case a hardlink followed by a delete of the source is used to make sure that object creation will not overwrite existing objects. On some file system/operating system combinations, this is unreliable. Set this config setting to rename there; however, this will remove the check that makes sure that existing object files will not get overwritten. core.notesRef When showing commit messages, also show notes which are stored in the given ref. The ref must be fully qualified. If the given ref does not exist, it is not an error but means that no notes should be printed. This setting defaults to "refs/notes/commits", and it can be overridden by the GIT_NOTES_REF environment variable. See git-notes(1). core.commitGraph If true, then git will read the commit-graph file (if it exists) to parse the graph structure of commits. Defaults to true. See git-commit-graph(1) for more information. core.useReplaceRefs If set to false, behave as if the --no-replace-objects option was given on the command line. See git(1) and git-replace(1) for more information. core.multiPackIndex Use the multi-pack-index file to track multiple packfiles using a single index. See git-multi-pack-index(1) for more information. Defaults to true. core.sparseCheckout Enable the "sparse checkout" feature. See git-sparse-checkout(1) for more information. core.sparseCheckoutCone Enables the "cone mode" of the sparse checkout feature. When the sparse-checkout file contains a limited set of patterns, this mode provides significant performance advantages. The "non-cone mode" can be requested to allow specifying more flexible patterns by setting this variable to false. See git-sparse-checkout(1) for more information. core.abbrev Set the length object names are abbreviated to. If unspecified or set to "auto", an appropriate value is computed based on the approximate number of packed objects in your repository, which hopefully is enough for abbreviated object names to stay unique for some time. If set to "no", no abbreviation is made and the object names are shown in their full length. The minimum length is 4. core.maxTreeDepth The maximum depth Git is willing to recurse while traversing a tree (e.g., "a/b/cde/f" has a depth of 4). This is a fail-safe to allow Git to abort cleanly, and should not generally need to be adjusted. The default is 4096. add.ignoreErrors, add.ignore-errors (deprecated) Tells git add to continue adding files when some files cannot be added due to indexing errors. Equivalent to the --ignore-errors option of git-add(1). add.ignore-errors is deprecated, as it does not follow the usual naming convention for configuration variables. add.interactive.useBuiltin Unused configuration variable. Used in Git versions v2.25.0 to v2.36.0 to enable the built-in version of git-add(1)'s interactive mode, which then became the default in Git versions v2.37.0 to v2.39.0. alias.* Command aliases for the git(1) command wrapper - e.g. after defining alias.last = cat-file commit HEAD, the invocation git last is equivalent to git cat-file commit HEAD. To avoid confusion and troubles with script usage, aliases that hide existing Git commands are ignored. Arguments are split by spaces, the usual shell quoting and escaping are supported. A quote pair or a backslash can be used to quote them. Note that the first word of an alias does not necessarily have to be a command. It can be a command-line option that will be passed into the invocation of git.
In particular, this is useful when used with -c to pass in one-time configurations or -p to force pagination. For example, loud-rebase = -c commit.verbose=true rebase can be defined such that running git loud-rebase would be equivalent to git -c commit.verbose=true rebase. Also, ps = -p status would be a helpful alias since git ps would paginate the output of git status where the original command does not. If the alias expansion is prefixed with an exclamation point, it will be treated as a shell command. For example, defining alias.new = !gitk --all --not ORIG_HEAD, the invocation git new is equivalent to running the shell command gitk --all --not ORIG_HEAD. Note that shell commands will be executed from the top-level directory of a repository, which may not necessarily be the current directory. GIT_PREFIX is set as returned by running git rev-parse --show-prefix from the original current directory. See git-rev-parse(1). am.keepcr If true, git-am will call git-mailsplit for patches in mbox format with parameter --keep-cr. In this case git-mailsplit will not remove \r from lines ending with \r\n. Can be overridden by giving --no-keep-cr from the command line. See git-am(1), git-mailsplit(1). am.threeWay By default, git am will fail if the patch does not apply cleanly. When set to true, this setting tells git am to fall back on 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally (equivalent to giving the --3way option from the command line). Defaults to false. See git-am(1). apply.ignoreWhitespace When set to change, tells git apply to ignore changes in whitespace, in the same way as the --ignore-space-change option. When set to one of: no, none, never, false, it tells git apply to respect all whitespace differences. See git-apply(1). apply.whitespace Tells git apply how to handle whitespace, in the same way as the --whitespace option. See git-apply(1). blame.blankBoundary Show blank commit object name for boundary commits in git-blame(1). This option defaults to false. blame.coloring This determines the coloring scheme to be applied to blame output. It can be repeatedLines, highlightRecent, or none which is the default. blame.date Specifies the format used to output dates in git-blame(1). If unset the iso format is used. For supported values, see the discussion of the --date option at git-log(1). blame.showEmail Show the author email instead of author name in git-blame(1). This option defaults to false. blame.showRoot Do not treat root commits as boundaries in git-blame(1). This option defaults to false. blame.ignoreRevsFile Ignore revisions listed in the file, one unabbreviated object name per line, in git-blame(1). Whitespace and comments beginning with # are ignored. This option may be repeated multiple times. Empty file names will reset the list of ignored revisions. This option will be handled before the command line option --ignore-revs-file. blame.markUnblamableLines Mark lines that were changed by an ignored revision that we could not attribute to another commit with a * in the output of git-blame(1). blame.markIgnoredLines Mark lines that were changed by an ignored revision that we attributed to another commit with a ? in the output of git-blame(1). branch.autoSetupMerge Tells git branch, git switch and git checkout to set up new branches so that git-pull(1) will appropriately merge from the starting point branch. 
Note that even if this option is not set, this behavior can be chosen per-branch using the --track and --no-track options. The valid settings are: false no automatic setup is done; true automatic setup is done when the starting point is a remote-tracking branch; always automatic setup is done when the starting point is either a local branch or remote-tracking branch; inherit if the starting point has a tracking configuration, it is copied to the new branch; simple automatic setup is done only when the starting point is a remote-tracking branch and the new branch has the same name as the remote branch. This option defaults to true. branch.autoSetupRebase When a new branch is created with git branch, git switch or git checkout that tracks another branch, this variable tells Git to set up pull to rebase instead of merge (see "branch.<name>.rebase"). When never, rebase is never automatically set to true. When local, rebase is set to true for tracked branches of other local branches. When remote, rebase is set to true for tracked branches of remote-tracking branches. When always, rebase will be set to true for all tracking branches. See "branch.autoSetupMerge" for details on how to set up a branch to track another branch. This option defaults to never. branch.sort This variable controls the sort ordering of branches when displayed by git-branch(1). Without the "--sort=<value>" option provided, the value of this variable will be used as the default. See git-for-each-ref(1) field names for valid values. branch.<name>.remote When on branch <name>, it tells git fetch and git push which remote to fetch from or push to. The remote to push to may be overridden with remote.pushDefault (for all branches). The remote to push to, for the current branch, may be further overridden by branch.<name>.pushRemote. If no remote is configured, or if you are not on any branch and there is more than one remote defined in the repository, it defaults to origin for fetching and remote.pushDefault for pushing. Additionally, . (a period) is the current local repository (a dot-repository), see branch.<name>.merge's final note below. branch.<name>.pushRemote When on branch <name>, it overrides branch.<name>.remote for pushing. It also overrides remote.pushDefault for pushing from branch <name>. When you pull from one place (e.g. your upstream) and push to another place (e.g. your own publishing repository), you would want to set remote.pushDefault to specify the remote to push to for all branches, and use this option to override it for a specific branch. branch.<name>.merge Defines, together with branch.<name>.remote, the upstream branch for the given branch. It tells git fetch/git pull/git rebase which branch to merge and can also affect git push (see push.default). When in branch <name>, it tells git fetch the default refspec to be marked for merging in FETCH_HEAD. The value is handled like the remote part of a refspec, and must match a ref which is fetched from the remote given by "branch.<name>.remote". The merge information is used by git pull (which first calls git fetch) to lookup the default branch for merging. Without this option, git pull defaults to merge the first refspec fetched. Specify multiple values to get an octopus merge. If you wish to setup git pull so that it merges into <name> from another branch in the local repository, you can point branch.<name>.merge to the desired branch, and use the relative path setting . (a period) for branch.<name>.remote. 
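A minimal sketch of that local-only setup (the branch and ref names are made up): with the following in .git/config, git pull on the topic branch merges from the local main branch.

        [branch "topic"]
                remote = .
                merge = refs/heads/main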
branch.<name>.mergeOptions Sets default options for merging into branch <name>. The syntax and supported options are the same as those of git-merge(1), but option values containing whitespace characters are currently not supported. branch.<name>.rebase When true, rebase the branch <name> on top of the fetched branch, instead of merging the default branch from the default remote when "git pull" is run. See "pull.rebase" for doing this in a non branch-specific manner. When merges (or just m), pass the --rebase-merges option to git rebase so that the local merge commits are included in the rebase (see git-rebase(1) for details). When the value is interactive (or just i), the rebase is run in interactive mode. NOTE: this is a possibly dangerous operation; do not use it unless you understand the implications (see git-rebase(1) for details). branch.<name>.description Branch description, can be edited with git branch --edit-description. Branch description is automatically added to the format-patch cover letter or request-pull summary. browser.<tool>.cmd Specify the command to invoke the specified browser. The specified command is evaluated in shell with the URLs passed as arguments. (See git-web--browse(1).) browser.<tool>.path Override the path for the given tool that may be used to browse HTML help (see -w option in git-help(1)) or a working repository in gitweb (see git-instaweb(1)). bundle.* The bundle.* keys may appear in a bundle list file found via the git clone --bundle-uri option. These keys currently have no effect if placed in a repository config file, though this will change in the future. See the bundle URI design document[1] for more details. bundle.version This integer value advertises the version of the bundle list format used by the bundle list. Currently, the only accepted value is 1. bundle.mode This string value should be either all or any. This value describes whether all of the advertised bundles are required to unbundle a complete understanding of the bundled information (all) or if any one of the listed bundle URIs is sufficient (any). bundle.heuristic If this string-valued key exists, then the bundle list is designed to work well with incremental git fetch commands. The heuristic signals that there are additional keys available for each bundle that help determine which subset of bundles the client should download. The only value currently understood is creationToken. bundle.<id>.* The bundle.<id>.* keys are used to describe a single item in the bundle list, grouped under <id> for identification purposes. bundle.<id>.uri This string value defines the URI by which Git can reach the contents of this <id>. This URI may be a bundle file or another bundle list. checkout.defaultRemote When you run git checkout <something> or git switch <something> and only have one remote, it may implicitly fall back on checking out and tracking e.g. origin/<something>. This stops working as soon as you have more than one remote with a <something> reference. This setting allows for setting the name of a preferred remote that should always win when it comes to disambiguation. The typical use-case is to set this to origin. Currently this is used by git-switch(1) and git-checkout(1) when git checkout <something> or git switch <something> will checkout the <something> branch on another remote, and by git-worktree(1) when git worktree add refers to a remote branch. This setting might be used for other checkout-like commands or functionality in the future. 
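A minimal sketch of the checkout.defaultRemote setting just described (origin is only the typical choice, not a requirement):

    [checkout]
            defaultRemote = origin

or, equivalently, git config checkout.defaultRemote origin. With this in place, git switch <something> prefers origin/<something> when more than one remote has a branch of that name.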
checkout.guess Provides the default value for the --guess or --no-guess option in git checkout and git switch. See git-switch(1) and git-checkout(1). checkout.workers The number of parallel workers to use when updating the working tree. The default is one, i.e. sequential execution. If set to a value less than one, Git will use as many workers as the number of logical cores available. This setting and checkout.thresholdForParallelism affect all commands that perform checkout. E.g. checkout, clone, reset, sparse-checkout, etc. Note: Parallel checkout usually delivers better performance for repositories located on SSDs or over NFS. For repositories on spinning disks and/or machines with a small number of cores, the default sequential checkout often performs better. The size and compression level of a repository might also influence how well the parallel version performs. checkout.thresholdForParallelism When running parallel checkout with a small number of files, the cost of subprocess spawning and inter-process communication might outweigh the parallelization gains. This setting allows you to define the minimum number of files for which parallel checkout should be attempted. The default is 100. clean.requireForce A boolean to make git-clean do nothing unless given -f, -i, or -n. Defaults to true. clone.defaultRemoteName The name of the remote to create when cloning a repository. Defaults to origin, and can be overridden by passing the --origin command-line option to git-clone(1). clone.rejectShallow Reject cloning a repository if it is a shallow one; this can be overridden by passing the --reject-shallow option on the command line. See git-clone(1) clone.filterSubmodules If a partial clone filter is provided (see --filter in git-rev-list(1)) and --recurse-submodules is used, also apply the filter to submodules. color.advice A boolean to enable/disable color in hints (e.g. when a push failed, see advice.* for a list). May be set to always, false (or never) or auto (or true), in which case colors are used only when the error output goes to a terminal. If unset, then the value of color.ui is used (auto by default). color.advice.hint Use customized color for hints. color.blame.highlightRecent Specify the line annotation color for git blame --color-by-age depending upon the age of the line. This setting should be set to a comma-separated list of color and date settings, starting and ending with a color, the dates should be set from oldest to newest. The metadata will be colored with the specified colors if the line was introduced before the given timestamp, overwriting older timestamped colors. Instead of an absolute timestamp relative timestamps work as well, e.g. 2.weeks.ago is valid to address anything older than 2 weeks. It defaults to blue,12 month ago,white,1 month ago,red, which colors everything older than one year blue, recent changes between one month and one year old are kept white, and lines introduced within the last month are colored red. color.blame.repeatedLines Use the specified color to colorize line annotations for git blame --color-lines, if they come from the same commit as the preceding line. Defaults to cyan. color.branch A boolean to enable/disable color in the output of git-branch(1). May be set to always, false (or never) or auto (or true), in which case colors are used only when the output is to a terminal. If unset, then the value of color.ui is used (auto by default). color.branch.<slot> Use customized color for branch coloration. 
<slot> is one of current (the current branch), local (a local branch), remote (a remote-tracking branch in refs/remotes/), upstream (upstream tracking branch), plain (other refs). color.diff Whether to use ANSI escape sequences to add color to patches. If this is set to always, git-diff(1), git-log(1), and git-show(1) will use color for all patches. If it is set to true or auto, those commands will only use color when output is to the terminal. If unset, then the value of color.ui is used (auto by default). This does not affect git-format-patch(1) or the git-diff-* plumbing commands. Can be overridden on the command line with the --color[=<when>] option. color.diff.<slot> Use customized color for diff colorization. <slot> specifies which part of the patch to use the specified color, and is one of context (context text - plain is a historical synonym), meta (metainformation), frag (hunk header), func (function in hunk header), old (removed lines), new (added lines), commit (commit headers), whitespace (highlighting whitespace errors), oldMoved (deleted lines), newMoved (added lines), oldMovedDimmed, oldMovedAlternative, oldMovedAlternativeDimmed, newMovedDimmed, newMovedAlternative newMovedAlternativeDimmed (See the <mode> setting of --color-moved in git-diff(1) for details), contextDimmed, oldDimmed, newDimmed, contextBold, oldBold, and newBold (see git-range-diff(1) for details). color.decorate.<slot> Use customized color for git log --decorate output. <slot> is one of branch, remoteBranch, tag, stash or HEAD for local branches, remote-tracking branches, tags, stash and HEAD, respectively and grafted for grafted commits. color.grep When set to always, always highlight matches. When false (or never), never. When set to true or auto, use color only when the output is written to the terminal. If unset, then the value of color.ui is used (auto by default). color.grep.<slot> Use customized color for grep colorization. <slot> specifies which part of the line to use the specified color, and is one of context non-matching text in context lines (when using -A, -B, or -C) filename filename prefix (when not using -h) function function name lines (when using -p) lineNumber line number prefix (when using -n) column column number prefix (when using --column) match matching text (same as setting matchContext and matchSelected) matchContext matching text in context lines matchSelected matching text in selected lines. Also, used to customize the following git-log(1) subcommands: --grep, --author, and --committer. selected non-matching text in selected lines. Also, used to customize the following git-log(1) subcommands: --grep, --author and --committer. separator separators between fields on a line (:, -, and =) and between hunks (--) color.interactive When set to always, always use colors for interactive prompts and displays (such as those used by "git-add --interactive" and "git-clean --interactive"). When false (or never), never. When set to true or auto, use colors only when the output is to the terminal. If unset, then the value of color.ui is used (auto by default). color.interactive.<slot> Use customized color for git add --interactive and git clean --interactive output. <slot> may be prompt, header, help or error, for four distinct types of normal output from interactive commands. color.pager A boolean to specify whether auto color modes should colorize output going to the pager. Defaults to true; set this to false if your pager does not understand ANSI color codes. 
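The color.* and color.<command>.<slot> variables described above are usually combined in one configuration file; the following is an illustrative sketch and the chosen colors are arbitrary:

    [color]
            ui = auto
    [color "branch"]
            current = yellow reverse
            local = yellow
            remote = green
    [color "diff"]
            meta = yellow bold
            old = red
            new = green
    [color "status"]
            added = green
            changed = yellow
            untracked = red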
color.push A boolean to enable/disable color in push errors. May be set to always, false (or never) or auto (or true), in which case colors are used only when the error output goes to a terminal. If unset, then the value of color.ui is used (auto by default). color.push.error Use customized color for push errors. color.remote If set, keywords at the start of the line are highlighted. The keywords are "error", "warning", "hint" and "success", and are matched case-insensitively. May be set to always, false (or never) or auto (or true). If unset, then the value of color.ui is used (auto by default). color.remote.<slot> Use customized color for each remote keyword. <slot> may be hint, warning, success or error which match the corresponding keyword. color.showBranch A boolean to enable/disable color in the output of git-show-branch(1). May be set to always, false (or never) or auto (or true), in which case colors are used only when the output is to a terminal. If unset, then the value of color.ui is used (auto by default). color.status A boolean to enable/disable color in the output of git-status(1). May be set to always, false (or never) or auto (or true), in which case colors are used only when the output is to a terminal. If unset, then the value of color.ui is used (auto by default). color.status.<slot> Use customized color for status colorization. <slot> is one of header (the header text of the status message), added or updated (files which are added but not committed), changed (files which are changed but not added in the index), untracked (files which are not tracked by Git), branch (the current branch), nobranch (the color the no branch warning is shown in, defaulting to red), localBranch or remoteBranch (the local and remote branch names, respectively, when branch and tracking information is displayed in the status short-format), or unmerged (files which have unmerged changes). color.transport A boolean to enable/disable color when pushes are rejected. May be set to always, false (or never) or auto (or true), in which case colors are used only when the error output goes to a terminal. If unset, then the value of color.ui is used (auto by default). color.transport.rejected Use customized color when a push was rejected. color.ui This variable determines the default value for variables such as color.diff and color.grep that control the use of color per command family. Its scope will expand as more commands learn configuration to set a default for the --color option. Set it to false or never if you prefer Git commands not to use color unless enabled explicitly with some other configuration or the --color option. Set it to always if you want all output not intended for machine consumption to use color, to true or auto (this is the default since Git 1.8.4) if you want such output to use color when written to the terminal. column.ui Specify whether supported commands should output in columns. This variable consists of a list of tokens separated by spaces or commas: These options control when the feature should be enabled (defaults to never): always always show in columns never never show in columns auto show in columns if the output is to the terminal These options control layout (defaults to column). Setting any of these implies always if none of always, never, or auto are specified. 
column fill columns before rows row fill rows before columns plain show in one column Finally, these options can be combined with a layout option (defaults to nodense): dense make unequal size columns to utilize more space nodense make equal size columns column.branch Specify whether to output branch listing in git branch in columns. See column.ui for details. column.clean Specify the layout when listing items in git clean -i, which always shows files and directories in columns. See column.ui for details. column.status Specify whether to output untracked files in git status in columns. See column.ui for details. column.tag Specify whether to output tag listings in git tag in columns. See column.ui for details. commit.cleanup This setting overrides the default of the --cleanup option in git commit. See git-commit(1) for details. Changing the default can be useful when you always want to keep lines that begin with the comment character # in your log message, in which case you would do git config commit.cleanup whitespace (note that you will have to remove the help lines that begin with # in the commit log template yourself, if you do this). commit.gpgSign A boolean to specify whether all commits should be GPG signed. Use of this option when doing operations such as rebase can result in a large number of commits being signed. It may be convenient to use an agent to avoid typing your GPG passphrase several times. commit.status A boolean to enable/disable inclusion of status information in the commit message template when using an editor to prepare the commit message. Defaults to true. commit.template Specify the pathname of a file to use as the template for new commit messages. commit.verbose A boolean or int to specify the level of verbosity with git commit. See git-commit(1). commitGraph.generationVersion Specifies the type of generation number version to use when writing or reading the commit-graph file. If version 1 is specified, then the corrected commit dates will not be written or read. Defaults to 2. commitGraph.maxNewFilters Specifies the default value for the --max-new-filters option of git commit-graph write (c.f., git-commit-graph(1)). commitGraph.readChangedPaths If true, then git will use the changed-path Bloom filters in the commit-graph file (if it exists, and they are present). Defaults to true. See git-commit-graph(1) for more information. credential.helper Specify an external helper to be called when a username or password credential is needed; the helper may consult external storage to avoid prompting the user for the credentials. This is normally the name of a credential helper with possible arguments, but may also be an absolute path with arguments or, if preceded by !, shell commands. Note that multiple helpers may be defined. See gitcredentials(7) for details and examples. credential.useHttpPath When acquiring credentials, consider the "path" component of an http or https URL to be important. Defaults to false. See gitcredentials(7) for more information. credential.username If no username is set for a network authentication, use this username by default. See credential.<context>.* below, and gitcredentials(7). credential.<url>.* Any of the credential.* options above can be applied selectively to some credentials. For example, "credential.https://example.com.username" would set the default username only for https connections to example.com. See gitcredentials(7) for details on how URLs are matched. 
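As an example of the credential.* variables above (the timeout, username and URL are placeholders; the helpers actually available depend on the system):

    [credential]
            helper = cache --timeout=300
    [credential "https://example.com"]
            username = alice

Here git-credential-cache(1) keeps credentials in memory for 300 seconds, and alice is offered as the default username for https connections to example.com.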
credentialCache.ignoreSIGHUP Tell git-credential-cache--daemon to ignore SIGHUP, instead of quitting. credentialStore.lockTimeoutMS The length of time, in milliseconds, for git-credential-store to retry when trying to lock the credentials file. A value of 0 means not to retry at all; -1 means to try indefinitely. Default is 1000 (i.e., retry for 1s). completion.commands This is only used by git-completion.bash to add or remove commands from the list of completed commands. Normally only porcelain commands and a few select others are completed. You can add more commands, separated by space, in this variable. Prefixing the command with - will remove it from the existing list. diff.autoRefreshIndex When using git diff to compare with work tree files, do not consider stat-only changes as changed. Instead, silently run git update-index --refresh to update the cached stat information for paths whose contents in the work tree match the contents in the index. This option defaults to true. Note that this affects only git diff Porcelain, and not lower level diff commands such as git diff-files. diff.dirstat A comma-separated list of --dirstat parameters specifying the default behavior of the --dirstat option to git-diff(1) and friends. The defaults can be overridden on the command line (using --dirstat=<param1,param2,...>). The fallback defaults (when not changed by diff.dirstat) are changes,noncumulative,3. The following parameters are available: changes Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. lines Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive --dirstat behavior than the changes behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other --*stat options. files Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest --dirstat behavior, since it does not have to look at the file contents at all. cumulative Count changes in a child directory for the parent directory as well. Note that when using cumulative, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the noncumulative parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: files,10,cumulative. diff.statNameWidth Limit the width of the filename part in --stat output. If set, applies to all commands generating --stat output except format-patch. diff.statGraphWidth Limit the width of the graph part in --stat output. If set, applies to all commands generating --stat output except format-patch. diff.context Generate diffs with <n> lines of context instead of the default of 3.
This value is overridden by the -U option. diff.interHunkContext Show the context between diff hunks, up to the specified number of lines, thereby fusing the hunks that are close to each other. This value serves as the default for the --inter-hunk-context command line option. diff.external If this config variable is set, diff generation is not performed using the internal diff machinery, but using the given command. Can be overridden with the GIT_EXTERNAL_DIFF environment variable. The command is called with parameters as described under "git Diffs" in git(1). Note: if you want to use an external diff program only on a subset of your files, you might want to use gitattributes(5) instead. diff.ignoreSubmodules Sets the default value of --ignore-submodules. Note that this affects only git diff Porcelain, and not lower level diff commands such as git diff-files. git checkout and git switch also honor this setting when reporting uncommitted changes. Setting it to all disables the submodule summary normally shown by git commit and git status when status.submoduleSummary is set unless it is overridden by using the --ignore-submodules command-line option. The git submodule commands are not affected by this setting. By default this is set to untracked so that any untracked submodules are ignored. diff.mnemonicPrefix If set, git diff uses a prefix pair that is different from the standard "a/" and "b/" depending on what is being compared. When this configuration is in effect, reverse diff output also swaps the order of the prefixes: git diff compares the (i)ndex and the (w)ork tree; git diff HEAD compares a (c)ommit and the (w)ork tree; git diff --cached compares a (c)ommit and the (i)ndex; git diff HEAD:file1 file2 compares an (o)bject and a (w)ork tree entity; git diff --no-index a b compares two non-git things (1) and (2). diff.noprefix If set, git diff does not show any source or destination prefix. diff.relative If set to true, git diff does not show changes outside of the directory and show pathnames relative to the current directory. diff.orderFile File indicating how to order files within a diff. See the -O option to git-diff(1) for details. If diff.orderFile is a relative pathname, it is treated as relative to the top of the working tree. diff.renameLimit The number of files to consider in the exhaustive portion of copy/rename detection; equivalent to the git diff option -l. If not set, the default value is currently 1000. This setting has no effect if rename detection is turned off. diff.renames Whether and how Git detects renames. If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. If set to "copies" or "copy", Git will detect copies, as well. Defaults to true. Note that this affects only git diff Porcelain like git-diff(1) and git-log(1), and not lower level commands such as git-diff-files(1). diff.suppressBlankEmpty A boolean to inhibit the standard behavior of printing a space before each empty output line. Defaults to false. diff.submodule Specify the format in which differences in submodules are shown. The "short" format just shows the names of the commits at the beginning and end of the range. The "log" format lists the commits in the range like git-submodule(1) summary does. The "diff" format shows an inline diff of the changed contents of the submodule. Defaults to "short". diff.wordRegex A POSIX Extended Regular Expression used to determine what is a "word" when performing word-by-word difference calculations. 
Character sequences that match the regular expression are "words", all other characters are ignorable whitespace. diff.<driver>.command The custom diff driver command. See gitattributes(5) for details. diff.<driver>.xfuncname The regular expression that the diff driver should use to recognize the hunk header. A built-in pattern may also be used. See gitattributes(5) for details. diff.<driver>.binary Set this option to true to make the diff driver treat files as binary. See gitattributes(5) for details. diff.<driver>.textconv The command that the diff driver should call to generate the text-converted version of a file. The result of the conversion is used to generate a human-readable diff. See gitattributes(5) for details. diff.<driver>.wordRegex The regular expression that the diff driver should use to split words in a line. See gitattributes(5) for details. diff.<driver>.cachetextconv Set this option to true to make the diff driver cache the text conversion outputs. See gitattributes(5) for details. diff.indentHeuristic Set this option to false to disable the default heuristics that shift diff hunk boundaries to make patches easier to read. diff.algorithm Choose a diff algorithm. The variants are as follows: default, myers The basic greedy diff algorithm. Currently, this is the default. minimal Spend extra time to make sure the smallest possible diff is produced. patience Use "patience diff" algorithm when generating patches. histogram This algorithm extends the patience algorithm to "support low-occurrence common elements". diff.wsErrorHighlight Highlight whitespace errors in the context, old or new lines of the diff. Multiple values are separated by comma, none resets previous values, default reset the list to new and all is a shorthand for old,new,context. The whitespace errors are colored with color.diff.whitespace. The command line option --ws-error-highlight=<kind> overrides this setting. diff.colorMoved If set to either a valid <mode> or a true value, moved lines in a diff are colored differently, for details of valid modes see --color-moved in git-diff(1). If simply set to true the default color mode will be used. When set to false, moved lines are not colored. diff.colorMovedWS When moved lines are colored using e.g. the diff.colorMoved setting, this option controls the <mode> for how spaces are treated; for details of valid modes see --color-moved-ws in git-diff(1). diff.tool Controls which diff tool is used by git-difftool(1). This variable overrides the value configured in merge.tool. The list below shows the valid built-in values. Any other value is treated as a custom diff tool and requires that a corresponding difftool.<tool>.cmd variable is defined. diff.guitool Controls which diff tool is used by git-difftool(1) when the -g/--gui flag is specified. This variable overrides the value configured in merge.guitool. The list below shows the valid built-in values. Any other value is treated as a custom diff tool and requires that a corresponding difftool.<guitool>.cmd variable is defined. araxis Use Araxis Merge (requires a graphical session) bc Use Beyond Compare (requires a graphical session) bc3 Use Beyond Compare (requires a graphical session) bc4 Use Beyond Compare (requires a graphical session) codecompare Use Code Compare (requires a graphical session) deltawalker Use DeltaWalker (requires a graphical session) diffmerge Use DiffMerge (requires a graphical session) diffuse Use Diffuse (requires a graphical session) ecmerge Use ECMerge (requires a graphical session) emerge Use Emacs' Emerge examdiff Use ExamDiff Pro (requires a graphical session) guiffy Use Guiffy's Diff Tool (requires a graphical session) gvimdiff Use gVim (requires a graphical session) kdiff3 Use KDiff3 (requires a graphical session) kompare Use Kompare (requires a graphical session) meld Use Meld (requires a graphical session) nvimdiff Use Neovim opendiff Use FileMerge (requires a graphical session) p4merge Use HelixCore P4Merge (requires a graphical session) smerge Use Sublime Merge (requires a graphical session) tkdiff Use TkDiff (requires a graphical session) vimdiff Use Vim winmerge Use WinMerge (requires a graphical session) xxdiff Use xxdiff (requires a graphical session) difftool.<tool>.cmd Specify the command to invoke the specified diff tool. The specified command is evaluated in shell with the following variables available: LOCAL is set to the name of the temporary file containing the contents of the diff pre-image and REMOTE is set to the name of the temporary file containing the contents of the diff post-image. See the --tool=<tool> option in git-difftool(1) for more details. difftool.<tool>.path Override the path for the given tool. This is useful in case your tool is not in the PATH. difftool.trustExitCode Exit difftool if the invoked diff tool returns a non-zero exit status. See the --trust-exit-code option in git-difftool(1) for more details. difftool.prompt Prompt before each invocation of the diff tool. difftool.guiDefault Set true to use the diff.guitool by default (equivalent to specifying the --gui argument), or auto to select diff.guitool or diff.tool depending on the presence of a DISPLAY environment variable value. The default is false, where the --gui argument must be provided explicitly for the diff.guitool to be used. extensions.objectFormat Specify the hash algorithm to use. The acceptable values are sha1 and sha256. If not specified, sha1 is assumed. It is an error to specify this key unless core.repositoryFormatVersion is 1. Note that this setting should only be set by git-init(1) or git-clone(1). Trying to change it after initialization will not work and will produce hard-to-diagnose issues. extensions.worktreeConfig If enabled, then worktrees will load config settings from the $GIT_DIR/config.worktree file in addition to the $GIT_COMMON_DIR/config file. Note that $GIT_COMMON_DIR and $GIT_DIR are the same for the main working tree, while other working trees have $GIT_DIR equal to $GIT_COMMON_DIR/worktrees/<id>/. The settings in the config.worktree file will override settings from any other config files. When enabling extensions.worktreeConfig, you must be careful to move certain values from the common config file to the main working tree's config.worktree file, if present: core.worktree must be moved from $GIT_COMMON_DIR/config to $GIT_COMMON_DIR/config.worktree. If core.bare is true, then it must be moved from $GIT_COMMON_DIR/config to $GIT_COMMON_DIR/config.worktree. It may also be beneficial to adjust the locations of core.sparseCheckout and core.sparseCheckoutCone depending on your desire for customizable sparse-checkout settings for each worktree. By default, the git sparse-checkout builtin enables extensions.worktreeConfig, assigns these config values on a per-worktree basis, and uses the $GIT_DIR/info/sparse-checkout file to specify the sparsity for each worktree independently. See git-sparse-checkout(1) for more details.
For historical reasons, extensions.worktreeConfig is respected regardless of the core.repositoryFormatVersion setting. fastimport.unpackLimit If the number of objects imported by git-fast-import(1) is below this limit, then the objects will be unpacked into loose object files. However, if the number of imported objects equals or exceeds this limit, then the pack will be stored as a pack. Storing the pack from a fast-import can make the import operation complete faster, especially on slow filesystems. If not set, the value of transfer.unpackLimit is used instead. feature.* The config settings that start with feature. modify the defaults of a group of other config settings. These groups are created by the Git developer community as recommended defaults and are subject to change. In particular, new config options may be added with different defaults. feature.experimental Enable config options that are new to Git, and are being considered for future defaults. Config settings included here may be added or removed with each release, including minor version updates. These settings may have unintended interactions since they are so new. Please enable this setting if you are interested in providing feedback on experimental features. The new default values are: fetch.negotiationAlgorithm=skipping may improve fetch negotiation times by skipping more commits at a time, reducing the number of round trips. pack.useBitmapBoundaryTraversal=true may improve bitmap traversal times by walking fewer objects. feature.manyFiles Enable config options that optimize for repos with many files in the working directory. With many files, commands such as git status and git checkout may be slow and these new defaults improve performance: index.skipHash=true speeds up index writes by not computing a trailing checksum. Note that this will cause Git versions earlier than 2.13.0 to refuse to parse the index and Git versions earlier than 2.40.0 will report a corrupted index during git fsck. index.version=4 enables path-prefix compression in the index. core.untrackedCache=true enables the untracked cache. This setting assumes that mtime is working on your machine. fetch.recurseSubmodules This option controls whether git fetch (and the underlying fetch in git pull) will recursively fetch into populated submodules. This option can be set either to a boolean value or to on-demand. Setting it to a boolean changes the behavior of fetch and pull to recurse unconditionally into submodules when set to true or to not recurse at all when set to false. When set to on-demand, fetch and pull will only recurse into a populated submodule when its superproject retrieves a commit that updates the submodule's reference. Defaults to on-demand, or to the value of submodule.recurse if set. fetch.fsckObjects If it is set to true, git-fetch-pack will check all fetched objects. See transfer.fsckObjects for what's checked. Defaults to false. If not set, the value of transfer.fsckObjects is used instead. fetch.fsck.<msg-id> Acts like fsck.<msg-id>, but is used by git-fetch-pack(1) instead of git-fsck(1). See the fsck.<msg-id> documentation for details. fetch.fsck.skipList Acts like fsck.skipList, but is used by git-fetch-pack(1) instead of git-fsck(1). See the fsck.skipList documentation for details. fetch.unpackLimit If the number of objects fetched over the Git native transfer is below this limit, then the objects will be unpacked into loose object files.
However if the number of received objects equals or exceeds this limit then the received pack will be stored as a pack, after adding any missing delta bases. Storing the pack from a push can make the push operation complete faster, especially on slow filesystems. If not set, the value of transfer.unpackLimit is used instead. fetch.prune If true, fetch will automatically behave as if the --prune option was given on the command line. See also remote.<name>.prune and the PRUNING section of git-fetch(1). fetch.pruneTags If true, fetch will automatically behave as if the refs/tags/*:refs/tags/* refspec was provided when pruning, if not set already. This allows for setting both this option and fetch.prune to maintain a 1=1 mapping to upstream refs. See also remote.<name>.pruneTags and the PRUNING section of git-fetch(1). fetch.output Control how ref update status is printed. Valid values are full and compact. Default value is full. See the OUTPUT section in git-fetch(1) for details. fetch.negotiationAlgorithm Control how information about the commits in the local repository is sent when negotiating the contents of the packfile to be sent by the server. Set to "consecutive" to use an algorithm that walks over consecutive commits checking each one. Set to "skipping" to use an algorithm that skips commits in an effort to converge faster, but may result in a larger-than-necessary packfile; or set to "noop" to not send any information at all, which will almost certainly result in a larger-than-necessary packfile, but will skip the negotiation step. Set to "default" to override settings made previously and use the default behaviour. The default is normally "consecutive", but if feature.experimental is true, then the default is "skipping". Unknown values will cause git fetch to error out. See also the --negotiate-only and --negotiation-tip options to git-fetch(1). fetch.showForcedUpdates Set to false to enable --no-show-forced-updates in git-fetch(1) and git-pull(1) commands. Defaults to true. fetch.parallel Specifies the maximal number of fetch operations to be run in parallel at a time (submodules, or remotes when the --multiple option of git-fetch(1) is in effect). A value of 0 will give some reasonable default. If unset, it defaults to 1. For submodules, this setting can be overridden using the submodule.fetchJobs config setting. fetch.writeCommitGraph Set to true to write a commit-graph after every git fetch command that downloads a pack-file from a remote. Using the --split option, most executions will create a very small commit-graph file on top of the existing commit-graph file(s). Occasionally, these files will merge and the write may take longer. Having an updated commit-graph file helps performance of many Git commands, including git merge-base, git push -f, and git log --graph. Defaults to false. fetch.bundleURI This value stores a URI for downloading Git object data from a bundle URI before performing an incremental fetch from the origin Git server. This is similar to how the --bundle-uri option behaves in git-clone(1). git clone --bundle-uri will set the fetch.bundleURI value if the supplied bundle URI contains a bundle list that is organized for incremental fetches. If you modify this value and your repository has a fetch.bundleCreationToken value, then remove that fetch.bundleCreationToken value before fetching from the new bundle URI. 
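As an illustrative combination of the fetch.* settings above (the values shown are examples, not recommendations):

    [fetch]
            prune = true
            pruneTags = true
            parallel = 4
            writeCommitGraph = true

With this, every git fetch drops remote-tracking refs and tags that no longer exist upstream, runs up to four submodule or --multiple fetches in parallel, and updates the commit-graph afterwards.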
fetch.bundleCreationToken When using fetch.bundleURI to fetch incrementally from a bundle list that uses the "creationToken" heuristic, this config value stores the maximum creationToken value of the downloaded bundles. This value is used to prevent downloading bundles in the future if the advertised creationToken is not strictly larger than this value. The creation token values are chosen by the provider serving the specific bundle URI. If you modify the URI at fetch.bundleURI, then be sure to remove the value for the fetch.bundleCreationToken value before fetching. format.attach Enable multipart/mixed attachments as the default for format-patch. The value can also be a double quoted string which will enable attachments as the default and set the value as the boundary. See the --attach option in git-format-patch(1). To countermand an earlier value, set it to an empty string. format.from Provides the default value for the --from option to format-patch. Accepts a boolean value, or a name and email address. If false, format-patch defaults to --no-from, using commit authors directly in the "From:" field of patch mails. If true, format-patch defaults to --from, using your committer identity in the "From:" field of patch mails and including a "From:" field in the body of the patch mail if different. If set to a non-boolean value, format-patch uses that value instead of your committer identity. Defaults to false. format.forceInBodyFrom Provides the default value for the --[no-]force-in-body-from option to format-patch. Defaults to false. format.numbered A boolean which can enable or disable sequence numbers in patch subjects. It defaults to "auto" which enables it only if there is more than one patch. It can be enabled or disabled for all messages by setting it to "true" or "false". See --numbered option in git-format-patch(1). format.headers Additional email headers to include in a patch to be submitted by mail. See git-format-patch(1). format.to, format.cc Additional recipients to include in a patch to be submitted by mail. See the --to and --cc options in git-format-patch(1). format.subjectPrefix The default for format-patch is to output files with the [PATCH] subject prefix. Use this variable to change that prefix. format.coverFromDescription The default mode for format-patch to determine which parts of the cover letter will be populated using the branch's description. See the --cover-from-description option in git-format-patch(1). format.signature The default for format-patch is to output a signature containing the Git version number. Use this variable to change that default. Set this variable to the empty string ("") to suppress signature generation. format.signatureFile Works just like format.signature except the contents of the file specified by this variable will be used as the signature. format.suffix The default for format-patch is to output files with the suffix .patch. Use this variable to change that suffix (make sure to include the dot if you want it). format.encodeEmailHeaders Encode email headers that have non-ASCII characters with "Q-encoding" (described in RFC 2047) for email transmission. Defaults to true. format.pretty The default pretty format for log/show/whatchanged command. See git-log(1), git-show(1), git-whatchanged(1). format.thread The default threading style for git format-patch. Can be a boolean value, or shallow or deep.
shallow threading makes every mail a reply to the head of the series, where the head is chosen from the cover letter, the --in-reply-to, and the first patch mail, in this order. deep threading makes every mail a reply to the previous one. A true boolean value is the same as shallow, and a false value disables threading. format.signOff A boolean value which lets you enable the -s/--signoff option of format-patch by default. Note: Adding the Signed-off-by trailer to a patch should be a conscious act and means that you certify you have the rights to submit this work under the same open source license. Please see the SubmittingPatches document for further discussion. format.coverLetter A boolean that controls whether to generate a cover-letter when format-patch is invoked, but in addition can be set to "auto", to generate a cover-letter only when there's more than one patch. Default is false. format.outputDirectory Set a custom directory to store the resulting files instead of the current working directory. All directory components will be created. format.filenameMaxLength The maximum length of the output filenames generated by the format-patch command; defaults to 64. Can be overridden by the --filename-max-length=<n> command line option. format.useAutoBase A boolean value which lets you enable the --base=auto option of format-patch by default. Can also be set to "whenAble" to allow enabling --base=auto if a suitable base is available, but to skip adding base info otherwise without the format dying. format.notes Provides the default value for the --notes option to format-patch. Accepts a boolean value, or a ref which specifies where to get notes. If false, format-patch defaults to --no-notes. If true, format-patch defaults to --notes. If set to a non-boolean value, format-patch defaults to --notes=<ref>, where ref is the non-boolean value. Defaults to false. If one wishes to use the ref ref/notes/true, please use that literal instead. This configuration can be specified multiple times in order to allow multiple notes refs to be included. In that case, it will behave similarly to multiple --[no-]notes[=] options passed in. That is, a value of true will show the default notes, a value of <ref> will also show notes from that notes ref and a value of false will negate previous configurations and not show notes. For example,

    [format]
            notes = true
            notes = foo
            notes = false
            notes = bar

will only show notes from refs/notes/bar. format.mboxrd A boolean value which enables the robust "mboxrd" format when --stdout is in use to escape "^>+From " lines. format.noprefix If set, do not show any source or destination prefix in patches. This is equivalent to the diff.noprefix option used by git diff (but which is not respected by format-patch). Note that by setting this, the receiver of any patches you generate will have to apply them using the -p0 option. filter.<driver>.clean The command which is used to convert the content of a worktree file to a blob upon checkin. See gitattributes(5) for details. filter.<driver>.smudge The command which is used to convert the content of a blob object to a worktree file upon checkout. See gitattributes(5) for details. fsck.<msg-id> During fsck git may find issues with legacy data which wouldn't be generated by current versions of git, and which wouldn't be sent over the wire if transfer.fsckObjects was set. This feature is intended to support working with legacy repositories containing such data.
Setting fsck.<msg-id> will be picked up by git-fsck(1), but to accept pushes of such data set receive.fsck.<msg-id> instead, or to clone or fetch it set fetch.fsck.<msg-id>. The rest of the documentation discusses fsck.* for brevity, but the same applies for the corresponding receive.fsck.* and fetch.fsck.* variables. Unlike variables like color.ui and core.editor, the receive.fsck.<msg-id> and fetch.fsck.<msg-id> variables will not fall back on the fsck.<msg-id> configuration if they aren't set. To uniformly configure the same fsck settings in different circumstances, all three of them must be set to the same values. When fsck.<msg-id> is set, errors can be switched to warnings and vice versa by configuring the fsck.<msg-id> setting where the <msg-id> is the fsck message ID and the value is one of error, warn or ignore. For convenience, fsck prefixes the error/warning with the message ID, e.g. "missingEmail: invalid author/committer line - missing email" means that setting fsck.missingEmail = ignore will hide that issue. In general, it is better to enumerate existing objects with problems with fsck.skipList, instead of listing the kind of breakages these problematic objects share to be ignored, as doing the latter will allow new instances of the same breakages to go unnoticed. Setting an unknown fsck.<msg-id> value will cause fsck to die, but doing the same for receive.fsck.<msg-id> and fetch.fsck.<msg-id> will only cause git to warn. See the Fsck Messages section of git-fsck(1) for supported values of <msg-id>. fsck.skipList The path to a list of object names (i.e. one unabbreviated SHA-1 per line) that are known to be broken in a non-fatal way and should be ignored. On versions of Git 2.20 and later, comments (#), empty lines, and any leading and trailing whitespace are ignored. Everything but a SHA-1 per line will error out on older versions. This feature is useful when an established project should be accepted despite early commits containing errors that can be safely ignored, such as invalid committer email addresses. Note: corrupt objects cannot be skipped with this setting. Like fsck.<msg-id> this variable has corresponding receive.fsck.skipList and fetch.fsck.skipList variants. Unlike variables like color.ui and core.editor, the receive.fsck.skipList and fetch.fsck.skipList variables will not fall back on the fsck.skipList configuration if they aren't set. To uniformly configure the same fsck settings in different circumstances, all three of them must be set to the same values. Older versions of Git (before 2.20) documented that the object names list should be sorted. This was never a requirement; the object names could appear in any order, but when reading the list we tracked whether the list was sorted for the purposes of an internal binary search implementation, which could save itself some work with an already sorted list. Unless you had a humongous list there was no reason to go out of your way to pre-sort the list. After Git version 2.20 a hash implementation is used instead, so there's now no reason to pre-sort the list. fsmonitor.allowRemote By default, the fsmonitor daemon refuses to work with network-mounted repositories. Setting fsmonitor.allowRemote to true overrides this behavior. Only respected when core.fsmonitor is set to true. fsmonitor.socketDir This Mac OS-specific option, if set, specifies the directory in which to create the Unix domain socket used for communication between the fsmonitor daemon and various Git commands.
The directory must reside on a native Mac OS filesystem. Only respected when core.fsmonitor is set to true. gc.aggressiveDepth The depth parameter used in the delta compression algorithm used by git gc --aggressive. This defaults to 50, which is the default for the --depth option when --aggressive isn't in use. See the documentation for the --depth option in git-repack(1) for more details. gc.aggressiveWindow The window size parameter used in the delta compression algorithm used by git gc --aggressive. This defaults to 250, which is a much more aggressive window size than the default --window of 10. See the documentation for the --window option in git-repack(1) for more details. gc.auto When there are approximately more than this many loose objects in the repository, git gc --auto will pack them. Some Porcelain commands use this command to perform a light-weight garbage collection from time to time. The default value is 6700. Setting this to 0 disables not only automatic packing based on the number of loose objects, but also any other heuristic git gc --auto will otherwise use to determine if there's work to do, such as gc.autoPackLimit. gc.autoPackLimit When there are more than this many packs that are not marked with *.keep file in the repository, git gc --auto consolidates them into one larger pack. The default value is 50. Setting this to 0 disables it. Setting gc.auto to 0 will also disable this. See the gc.bigPackThreshold configuration variable below. When in use, it'll affect how the auto pack limit works. gc.autoDetach Make git gc --auto return immediately and run in the background if the system supports it. Default is true. gc.bigPackThreshold If non-zero, all non-cruft packs larger than this limit are kept when git gc is run. This is very similar to --keep-largest-pack except that all non-cruft packs that meet the threshold are kept, not just the largest pack. Defaults to zero. Common unit suffixes of k, m, or g are supported. Note that if the number of kept packs is more than gc.autoPackLimit, this configuration variable is ignored; all packs except the base pack will be repacked. After this the number of packs should go below gc.autoPackLimit and gc.bigPackThreshold should be respected again. If the amount of memory estimated for git repack to run smoothly is not available and gc.bigPackThreshold is not set, the largest pack will also be excluded (this is the equivalent of running git gc with --keep-largest-pack). gc.writeCommitGraph If true, then gc will rewrite the commit-graph file when git-gc(1) is run. When using git gc --auto the commit-graph will be updated if housekeeping is required. Default is true. See git-commit-graph(1) for details. gc.logExpiry If the file gc.log exists, then git gc --auto will print its content and exit with status zero instead of running unless that file is more than gc.logExpiry old. Default is "1.day". See gc.pruneExpire for more ways to specify its value. gc.packRefs Running git pack-refs in a repository renders it unclonable by Git versions prior to 1.5.1.2 over dumb transports such as HTTP. This variable determines whether git gc runs git pack-refs. This can be set to notbare to enable it within all non-bare repos or it can be set to a boolean value. The default is true. gc.cruftPacks Store unreachable objects in a cruft pack (see git-repack(1)) instead of as loose objects. The default is true. gc.maxCruftSize Limit the size of new cruft packs when repacking.
When specified in addition to --max-cruft-size, the command line option takes priority. See the --max-cruft-size option of git-repack(1). gc.pruneExpire When git gc is run, it will call prune --expire 2.weeks.ago (and repack --cruft --cruft-expiration 2.weeks.ago if using cruft packs via gc.cruftPacks or --cruft). Override the grace period with this config variable. The value "now" may be used to disable this grace period and always prune unreachable objects immediately, or "never" may be used to suppress pruning. This feature helps prevent corruption when git gc runs concurrently with another process writing to the repository; see the "NOTES" section of git-gc(1). gc.worktreePruneExpire When git gc is run, it calls git worktree prune --expire 3.months.ago. This config variable can be used to set a different grace period. The value "now" may be used to disable the grace period and prune $GIT_DIR/worktrees immediately, or "never" may be used to suppress pruning. gc.reflogExpire, gc.<pattern>.reflogExpire git reflog expire removes reflog entries older than this time; defaults to 90 days. The value "now" expires all entries immediately, and "never" suppresses expiration altogether. With "<pattern>" (e.g. "refs/stash") in the middle the setting applies only to the refs that match the <pattern>. gc.reflogExpireUnreachable, gc.<pattern>.reflogExpireUnreachable git reflog expire removes reflog entries older than this time that are not reachable from the current tip; defaults to 30 days. The value "now" expires all entries immediately, and "never" suppresses expiration altogether. With "<pattern>" (e.g. "refs/stash") in the middle, the setting applies only to the refs that match the <pattern>. These types of entries are generally created as a result of using git commit --amend or git rebase and are the commits prior to the amend or rebase occurring. Since these changes are not part of the current project most users will want to expire them sooner, which is why the default is more aggressive than gc.reflogExpire. gc.recentObjectsHook When considering whether or not to remove an object (either when generating a cruft pack or storing unreachable objects as loose), use the shell to execute the specified command(s). Interpret their output as object IDs which Git will consider as "recent", regardless of their age. By treating their mtimes as "now", any objects (and their descendants) mentioned in the output will be kept regardless of their true age. Output must contain exactly one hex object ID per line, and nothing else. Objects which cannot be found in the repository are ignored. Multiple hooks are supported, but all must exit successfully, else the operation (either generating a cruft pack or unpacking unreachable objects) will be halted. gc.repackFilter When repacking, use the specified filter to move certain objects into a separate packfile. See the --filter=<filter-spec> option of git-repack(1). gc.repackFilterTo When repacking and using a filter, see gc.repackFilter, the specified location will be used to create the packfile containing the filtered out objects. WARNING: The specified location should be accessible, using for example the Git alternates mechanism, otherwise the repo could be considered corrupt by Git as it might not be able to access the objects in that packfile. See the --filter-to=<dir> option of git-repack(1) and the objects/info/alternates section of gitrepository-layout(5).
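A minimal sketch combining some of the gc.* settings above; the values written out here are simply the documented defaults, shown explicitly for illustration:

    [gc]
            auto = 6700
            pruneExpire = 2.weeks.ago
            worktreePruneExpire = 3.months.ago
            writeCommitGraph = true

The values "now" or "never" may be substituted for the expiry settings to prune immediately or to suppress pruning, as described above.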
gc.rerereResolved Records of conflicted merge you resolved earlier are kept for this many days when git rerere gc is run. You can also use more human-readable "1.month.ago", etc. The default is 60 days. See git-rerere(1). gc.rerereUnresolved Records of conflicted merge you have not resolved are kept for this many days when git rerere gc is run. You can also use more human-readable "1.month.ago", etc. The default is 15 days. See git-rerere(1). gitcvs.commitMsgAnnotation Append this string to each commit message. Set to empty string to disable this feature. Defaults to "via git-CVS emulator". gitcvs.enabled Whether the CVS server interface is enabled for this repository. See git-cvsserver(1). gitcvs.logFile Path to a log file where the CVS server interface well... logs various stuff. See git-cvsserver(1). gitcvs.usecrlfattr If true, the server will look up the end-of-line conversion attributes for files to determine the -k modes to use. If the attributes force Git to treat a file as text, the -k mode will be left blank so CVS clients will treat it as text. If they suppress text conversion, the file will be set with -kb mode, which suppresses any newline munging the client might otherwise do. If the attributes do not allow the file type to be determined, then gitcvs.allBinary is used. See gitattributes(5). gitcvs.allBinary This is used if gitcvs.usecrlfattr does not resolve the correct -kb mode to use. If true, all unresolved files are sent to the client in mode -kb. This causes the client to treat them as binary files, which suppresses any newline munging it otherwise might do. Alternatively, if it is set to "guess", then the contents of the file are examined to decide if it is binary, similar to core.autocrlf. gitcvs.dbName Database used by git-cvsserver to cache revision information derived from the Git repository. The exact meaning depends on the used database driver, for SQLite (which is the default driver) this is a filename. Supports variable substitution (see git-cvsserver(1) for details). May not contain semicolons (;). Default: %Ggitcvs.%m.sqlite gitcvs.dbDriver Used Perl DBI driver. You can specify any available driver for this here, but it might not work. git-cvsserver is tested with DBD::SQLite, reported to work with DBD::Pg, and reported not to work with DBD::mysql. Experimental feature. May not contain double colons (:). Default: SQLite. See git-cvsserver(1). gitcvs.dbUser, gitcvs.dbPass Database user and password. Only useful if setting gitcvs.dbDriver, since SQLite has no concept of database users and/or passwords. gitcvs.dbUser supports variable substitution (see git-cvsserver(1) for details). gitcvs.dbTableNamePrefix Database table name prefix. Prepended to the names of any database tables used, allowing a single database to be used for several repositories. Supports variable substitution (see git-cvsserver(1) for details). Any non-alphabetic characters will be replaced with underscores. All gitcvs variables except for gitcvs.usecrlfattr and gitcvs.allBinary can also be specified as gitcvs.<access_method>.<varname> (where access_method is one of "ext" and "pserver") to make them apply only for the given access method. gitweb.category, gitweb.description, gitweb.owner, gitweb.url See gitweb(1) for description. gitweb.avatar, gitweb.blame, gitweb.grep, gitweb.highlight, gitweb.patches, gitweb.pickaxe, gitweb.remote_heads, gitweb.showSizes, gitweb.snapshot See gitweb.conf(5) for description. grep.lineNumber If set to true, enable -n option by default. 
grep.column If set to true, enable the --column option by default. grep.patternType Set the default matching behavior. Using a value of basic, extended, fixed, or perl will enable the --basic-regexp, --extended-regexp, --fixed-strings, or --perl-regexp option accordingly, while the value default will use the grep.extendedRegexp option to choose between basic and extended. grep.extendedRegexp If set to true, enable --extended-regexp option by default. This option is ignored when the grep.patternType option is set to a value other than default. grep.threads Number of grep worker threads to use. If unset (or set to 0), Git will use as many threads as the number of logical cores available. grep.fullName If set to true, enable --full-name option by default. grep.fallbackToNoIndex If set to true, fall back to git grep --no-index if git grep is executed outside of a git repository. Defaults to false. gpg.program Use this custom program instead of "gpg" found on $PATH when making or verifying a PGP signature. The program must support the same command-line interface as GPG, namely, to verify a detached signature, "gpg --verify $signature - <$file" is run, and the program is expected to signal a good signature by exiting with code 0. To generate an ASCII-armored detached signature, the standard input of "gpg -bsau $key" is fed with the contents to be signed, and the program is expected to send the result to its standard output. gpg.format Specifies which key format to use when signing with --gpg-sign. Default is "openpgp". Other possible values are "x509", "ssh". See gitformat-signature(5) for the signature format, which differs based on the selected gpg.format. gpg.<format>.program Use this to customize the program used for the signing format you chose. (see gpg.program and gpg.format) gpg.program can still be used as a legacy synonym for gpg.openpgp.program. The default value for gpg.x509.program is "gpgsm" and gpg.ssh.program is "ssh-keygen". gpg.minTrustLevel Specifies a minimum trust level for signature verification. If this option is unset, then signature verification for merge operations requires a key with at least marginal trust. Other operations that perform signature verification require a key with at least undefined trust. Setting this option overrides the required trust-level for all operations. Supported values, in increasing order of significance: undefined never marginal fully ultimate gpg.ssh.defaultKeyCommand This command will be run when user.signingkey is not set and a ssh signature is requested. On successful exit a valid ssh public key prefixed with key:: is expected in the first line of its output. This allows for a script doing a dynamic lookup of the correct public key when it is impractical to statically configure user.signingKey. For example when keys or SSH Certificates are rotated frequently or selection of the right key depends on external factors unknown to git. gpg.ssh.allowedSignersFile A file containing ssh public keys which you are willing to trust. The file consists of one or more lines of principals followed by an ssh public key. e.g.: user1@example.com,user2@example.com ssh-rsa AAAAX1... See ssh-keygen(1) "ALLOWED SIGNERS" for details. The principal is only used to identify the key and is available when verifying a signature. SSH has no concept of trust levels like gpg does. 
To be able to differentiate between valid signatures and trusted signatures, the trust level of a signature verification is set to fully when the public key is present in the allowedSignersFile. Otherwise the trust level is undefined and git verify-commit/tag will fail. This file can be set to a location outside of the repository and every developer maintains their own trust store. A central repository server could generate this file automatically from ssh keys with push access to verify the code against. In a corporate setting this file is probably generated at a global location from automation that already handles developer ssh keys. A repository that only allows signed commits can store the file in the repository itself using a path relative to the top-level of the working tree. This way only committers with an already valid key can add or change keys in the keyring. Since OpenSSH 8.8 this file allows specifying a key lifetime using valid-after & valid-before options. Git will mark signatures as valid if the signing key was valid at the time of the signature's creation. This allows users to change a signing key without invalidating all previously made signatures. Using an SSH CA key with the cert-authority option (see ssh-keygen(1) "CERTIFICATES") is also valid. gpg.ssh.revocationFile Either an SSH KRL or a list of revoked public keys (without the principal prefix). See ssh-keygen(1) for details. If a public key is found in this file then it will always be treated as having trust level "never" and signatures will show as invalid. (A short end-to-end example of these SSH signing settings follows the gui.* settings below.) gui.commitMsgWidth Defines how wide the commit message window is in the git-gui(1). "75" is the default. gui.diffContext Specifies how many context lines should be used in calls to diff made by the git-gui(1). The default is "5". gui.displayUntracked Determines if git-gui(1) shows untracked files in the file list. The default is "true". gui.encoding Specifies the default character encoding to use for displaying file contents in git-gui(1) and gitk(1). It can be overridden by setting the encoding attribute for relevant files (see gitattributes(5)). If this option is not set, the tools default to the locale encoding. gui.matchTrackingBranch Determines if new branches created with git-gui(1) should default to tracking remote branches with matching names or not. Default: "false". gui.newBranchTemplate Is used as a suggested name when creating new branches using the git-gui(1). gui.pruneDuringFetch "true" if git-gui(1) should prune remote-tracking branches when performing a fetch. The default value is "false". gui.trustmtime Determines if git-gui(1) should trust the file modification timestamp or not. By default the timestamps are not trusted. gui.spellingDictionary Specifies the dictionary used for spell checking commit messages in the git-gui(1). When set to "none" spell checking is turned off. gui.fastCopyBlame If true, git gui blame uses -C instead of -C -C for original location detection. It makes blame significantly faster on huge repositories at the expense of less thorough copy detection. gui.copyBlameThreshold Specifies the threshold to use in git gui blame original location detection, measured in alphanumeric characters. See the git-blame(1) manual for more information on copy detection. gui.blamehistoryctx Specifies the radius of history context in days to show in gitk(1) for the selected commit, when the Show History Context menu item is invoked from git gui blame. If this variable is set to zero, the whole history is shown.
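Tying the SSH signing settings together, a minimal sketch of a setup that signs with an SSH key and verifies against an allowed-signers file might look like this (the key path, e-mail address, and file location are illustrative):

    $ git config --global gpg.format ssh
    $ git config --global user.signingKey ~/.ssh/id_ed25519.pub
    $ git config --global gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers
    $ echo "user1@example.com $(cat ~/.ssh/id_ed25519.pub)" >> ~/.config/git/allowed_signers
    $ git commit -S -m "signed commit"
    $ git verify-commit HEAD

With this in place, signatures made by keys listed in the allowed-signers file are reported with trust level "fully", as described above, while signatures from unlisted keys fail verification.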
guitool.<name>.cmd Specifies the shell command line to execute when the corresponding item of the git-gui(1) Tools menu is invoked. This option is mandatory for every tool. The command is executed from the root of the working directory, and in the environment it receives the name of the tool as GIT_GUITOOL, the name of the currently selected file as FILENAME, and the name of the current branch as CUR_BRANCH (if the head is detached, CUR_BRANCH is empty). guitool.<name>.needsFile Run the tool only if a diff is selected in the GUI. It guarantees that FILENAME is not empty. guitool.<name>.noConsole Run the command silently, without creating a window to display its output. guitool.<name>.noRescan Dont rescan the working directory for changes after the tool finishes execution. guitool.<name>.confirm Show a confirmation dialog before actually running the tool. guitool.<name>.argPrompt Request a string argument from the user, and pass it to the tool through the ARGS environment variable. Since requesting an argument implies confirmation, the confirm option has no effect if this is enabled. If the option is set to true, yes, or 1, the dialog uses a built-in generic prompt; otherwise the exact value of the variable is used. guitool.<name>.revPrompt Request a single valid revision from the user, and set the REVISION environment variable. In other aspects this option is similar to argPrompt, and can be used together with it. guitool.<name>.revUnmerged Show only unmerged branches in the revPrompt subdialog. This is useful for tools similar to merge or rebase, but not for things like checkout or reset. guitool.<name>.title Specifies the title to use for the prompt dialog. The default is the tool name. guitool.<name>.prompt Specifies the general prompt string to display at the top of the dialog, before subsections for argPrompt and revPrompt. The default value includes the actual command. help.browser Specify the browser that will be used to display help in the web format. See git-help(1). help.format Override the default help format used by git-help(1). Values man, info, web and html are supported. man is the default. web and html are the same. help.autoCorrect If git detects typos and can identify exactly one valid command similar to the error, git will try to suggest the correct command or even run the suggestion automatically. Possible config values are: 0 (default): show the suggested command. positive number: run the suggested command after specified deciseconds (0.1 sec). "immediate": run the suggested command immediately. "prompt": show the suggestion and prompt for confirmation to run the command. "never": dont run or show any suggested command. help.htmlPath Specify the path where the HTML documentation resides. File system paths and URLs are supported. HTML pages will be prefixed with this path when help is displayed in the web format. This defaults to the documentation path of your Git installation. http.proxy Override the HTTP proxy, normally configured using the http_proxy, https_proxy, and all_proxy environment variables (see curl(1)). In addition to the syntax understood by curl, it is possible to specify a proxy string with a user name but no password, in which case git will attempt to acquire one in the same way it does for other credentials. See gitcredentials(7) for more information. The syntax thus is [protocol://][user[:password]@]proxyhost[:port]. 
This can be overridden on a per-remote basis; see remote.<name>.proxy http.proxyAuthMethod Set the method with which to authenticate against the HTTP proxy. This only takes effect if the configured proxy string contains a user name part (i.e. is of the form user@host or user@host:port). This can be overridden on a per-remote basis; see remote.<name>.proxyAuthMethod. Both can be overridden by the GIT_HTTP_PROXY_AUTHMETHOD environment variable. Possible values are: anyauth - Automatically pick a suitable authentication method. It is assumed that the proxy answers an unauthenticated request with a 407 status code and one or more Proxy-authenticate headers with supported authentication methods. This is the default. basic - HTTP Basic authentication digest - HTTP Digest authentication; this prevents the password from being transmitted to the proxy in clear text negotiate - GSS-Negotiate authentication (compare the --negotiate option of curl(1)) ntlm - NTLM authentication (compare the --ntlm option of curl(1)) http.proxySSLCert The pathname of a file that stores a client certificate to use to authenticate with an HTTPS proxy. Can be overridden by the GIT_PROXY_SSL_CERT environment variable. http.proxySSLKey The pathname of a file that stores a private key to use to authenticate with an HTTPS proxy. Can be overridden by the GIT_PROXY_SSL_KEY environment variable. http.proxySSLCertPasswordProtected Enable Gits password prompt for the proxy SSL certificate. Otherwise OpenSSL will prompt the user, possibly many times, if the certificate or private key is encrypted. Can be overridden by the GIT_PROXY_SSL_CERT_PASSWORD_PROTECTED environment variable. http.proxySSLCAInfo Pathname to the file containing the certificate bundle that should be used to verify the proxy with when using an HTTPS proxy. Can be overridden by the GIT_PROXY_SSL_CAINFO environment variable. http.emptyAuth Attempt authentication without seeking a username or password. This can be used to attempt GSS-Negotiate authentication without specifying a username in the URL, as libcurl normally requires a username for authentication. http.delegation Control GSSAPI credential delegation. The delegation is disabled by default in libcurl since version 7.21.7. Set parameter to tell the server what it is allowed to delegate when it comes to user credentials. Used with GSS/kerberos. Possible values are: none - Dont allow any delegation. policy - Delegates if and only if the OK-AS-DELEGATE flag is set in the Kerberos service ticket, which is a matter of realm policy. always - Unconditionally allow the server to delegate. http.extraHeader Pass an additional HTTP header when communicating with a server. If more than one such entry exists, all of them are added as extra headers. To allow overriding the settings inherited from the system config, an empty value will reset the extra headers to the empty list. http.cookieFile The pathname of a file containing previously stored cookie lines, which should be used in the Git http session, if they match the server. The file format of the file to read cookies from should be plain HTTP headers or the Netscape/Mozilla cookie file format (see curl(1)). NOTE that the file specified with http.cookieFile is used only as input unless http.saveCookies is set. http.saveCookies If set, store cookies received during requests to the file specified by http.cookieFile. Has no effect if http.cookieFile is unset. http.version Use the specified HTTP protocol version when communicating with a server. 
Set this if you want to force a particular protocol version instead of letting libcurl pick one; the available and default versions depend on libcurl. Currently the possible values of this option are: HTTP/2 and HTTP/1.1. http.curloptResolve Hostname resolution information that will be used first by libcurl when sending HTTP requests. This information should be in one of the following formats: [+]HOST:PORT:ADDRESS[,ADDRESS] -HOST:PORT The first format redirects all requests to the given HOST:PORT to the provided ADDRESS(es). The second format clears all previous config values for that HOST:PORT combination. To allow easy overriding of all the settings inherited from the system config, an empty value will reset all resolution information to the empty list. http.sslVersion The SSL version to use when negotiating an SSL connection, if you want to force a version other than the default. The available and default version depend on whether libcurl was built against NSS or OpenSSL and the particular configuration of the crypto library in use. Internally this sets the CURLOPT_SSL_VERSION option; see the libcurl documentation for more details on the format of this option and for the SSL versions supported. Currently the possible values of this option are: sslv2, sslv3, tlsv1, tlsv1.0, tlsv1.1, tlsv1.2, tlsv1.3. Can be overridden by the GIT_SSL_VERSION environment variable. To force git to use libcurl's default ssl version and ignore any explicit http.sslVersion option, set GIT_SSL_VERSION to the empty string. http.sslCipherList A list of SSL ciphers to use when negotiating an SSL connection. The available ciphers depend on whether libcurl was built against NSS or OpenSSL and the particular configuration of the crypto library in use. Internally this sets the CURLOPT_SSL_CIPHER_LIST option; see the libcurl documentation for more details on the format of this list. Can be overridden by the GIT_SSL_CIPHER_LIST environment variable. To force git to use libcurl's default cipher list and ignore any explicit http.sslCipherList option, set GIT_SSL_CIPHER_LIST to the empty string. http.sslVerify Whether to verify the SSL certificate when fetching or pushing over HTTPS. Defaults to true. Can be overridden by the GIT_SSL_NO_VERIFY environment variable. http.sslCert File containing the SSL certificate when fetching or pushing over HTTPS. Can be overridden by the GIT_SSL_CERT environment variable. http.sslKey File containing the SSL private key when fetching or pushing over HTTPS. Can be overridden by the GIT_SSL_KEY environment variable. http.sslCertPasswordProtected Enable Git's password prompt for the SSL certificate. Otherwise OpenSSL will prompt the user, possibly many times, if the certificate or private key is encrypted. Can be overridden by the GIT_SSL_CERT_PASSWORD_PROTECTED environment variable. http.sslCAInfo File containing the certificates to verify the peer with when fetching or pushing over HTTPS. Can be overridden by the GIT_SSL_CAINFO environment variable. http.sslCAPath Path containing files with the CA certificates to verify the peer with when fetching or pushing over HTTPS. Can be overridden by the GIT_SSL_CAPATH environment variable. http.sslBackend Name of the SSL backend to use (e.g. "openssl" or "schannel"). This option is ignored if cURL lacks support for choosing the SSL backend at runtime. http.schannelCheckRevoke Used to enforce or disable certificate revocation checks in cURL when http.sslBackend is set to "schannel". Defaults to true if unset. Only necessary to disable this if Git consistently errors and the message is about checking the revocation status of a certificate.
This option is ignored if cURL lacks support for setting the relevant SSL option at runtime. http.schannelUseSSLCAInfo As of cURL v7.60.0, the Secure Channel backend can use the certificate bundle provided via http.sslCAInfo, but that would override the Windows Certificate Store. Since this is not desirable by default, Git will tell cURL not to use that bundle by default when the schannel backend was configured via http.sslBackend, unless http.schannelUseSSLCAInfo overrides this behavior. http.pinnedPubkey Public key of the https service. It may either be the filename of a PEM or DER encoded public key file or a string starting with sha256// followed by the base64 encoded sha256 hash of the public key. See also libcurl CURLOPT_PINNEDPUBLICKEY. git will exit with an error if this option is set but not supported by cURL. http.sslTry Attempt to use AUTH SSL/TLS and encrypted data transfers when connecting via regular FTP protocol. This might be needed if the FTP server requires it for security reasons or you wish to connect securely whenever remote FTP server supports it. Default is false since it might trigger certificate verification errors on misconfigured servers. http.maxRequests How many HTTP requests to launch in parallel. Can be overridden by the GIT_HTTP_MAX_REQUESTS environment variable. Default is 5. http.minSessions The number of curl sessions (counted across slots) to be kept across requests. They will not be ended with curl_easy_cleanup() until http_cleanup() is invoked. If USE_CURL_MULTI is not defined, this value will be capped at 1. Defaults to 1. http.postBuffer Maximum size in bytes of the buffer used by smart HTTP transports when POSTing data to the remote system. For requests larger than this buffer size, HTTP/1.1 and Transfer-Encoding: chunked is used to avoid creating a massive pack file locally. Default is 1 MiB, which is sufficient for most requests. Note that raising this limit is only effective for disabling chunked transfer encoding and therefore should be used only where the remote server or a proxy only supports HTTP/1.0 or is noncompliant with the HTTP standard. Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes. http.lowSpeedLimit, http.lowSpeedTime If the HTTP transfer speed, in bytes per second, is less than http.lowSpeedLimit for longer than http.lowSpeedTime seconds, the transfer is aborted. Can be overridden by the GIT_HTTP_LOW_SPEED_LIMIT and GIT_HTTP_LOW_SPEED_TIME environment variables. http.noEPSV A boolean which disables using of EPSV ftp command by curl. This can be helpful with some "poor" ftp servers which dont support EPSV mode. Can be overridden by the GIT_CURL_FTP_NO_EPSV environment variable. Default is false (curl will use EPSV). http.userAgent The HTTP USER_AGENT string presented to an HTTP server. The default value represents the version of the Git client such as git/1.7.1. This option allows you to override this value to a more common value such as Mozilla/4.0. This may be necessary, for instance, if connecting through a firewall that restricts HTTP connections to a set of common USER_AGENT strings (but not including those like git/1.7.1). Can be overridden by the GIT_HTTP_USER_AGENT environment variable. http.followRedirects Whether git should follow HTTP redirects. If set to true, git will transparently follow any redirect issued by a server it encounters. 
If set to false, git will treat all redirects as errors. If set to initial, git will follow redirects only for the initial request to a remote, but not for subsequent follow-up HTTP requests. Since git uses the redirected URL as the base for the follow-up requests, this is generally sufficient. The default is initial. http.<url>.* Any of the http.* options above can be applied selectively to some URLs. For a config key to match a URL, each element of the config key is compared to that of the URL, in the following order: 1. Scheme (e.g., https in https://example.com/ ). This field must match exactly between the config key and the URL. 2. Host/domain name (e.g., example.com in https://example.com/ ). This field must match between the config key and the URL. It is possible to specify a * as part of the host name to match all subdomains at this level. https://*.example.com/ for example would match https://foo.example.com/ , but not https://foo.bar.example.com/ . 3. Port number (e.g., 8080 in http://example.com:8080/ ). This field must match exactly between the config key and the URL. Omitted port numbers are automatically converted to the correct default for the scheme before matching. 4. Path (e.g., repo.git in https://example.com/repo.git ). The path field of the config key must match the path field of the URL either exactly or as a prefix of slash-delimited path elements. This means a config key with path foo/ matches URL path foo/bar. A prefix can only match on a slash (/) boundary. Longer matches take precedence (so a config key with path foo/bar is a better match to URL path foo/bar than a config key with just path foo/). 5. User name (e.g., user in https://user@example.com/repo.git). If the config key has a user name it must match the user name in the URL exactly. If the config key does not have a user name, that config key will match a URL with any user name (including none), but at a lower precedence than a config key with a user name. The list above is ordered by decreasing precedence; a URL that matches a config keys path is preferred to one that matches its user name. For example, if the URL is https://user@example.com/foo/bar a config key match of https://example.com/foo will be preferred over a config key match of https://user@example.com. All URLs are normalized before attempting any matching (the password part, if embedded in the URL, is always ignored for matching purposes) so that equivalent URLs that are simply spelled differently will match properly. Environment variable settings always override any matches. The URLs that are matched against are those given directly to Git commands. This means any URLs visited as a result of a redirection do not participate in matching. i18n.commitEncoding Character encoding the commit messages are stored in; Git itself does not care per se, but this information is necessary e.g. when importing commits from emails or in the gitk graphical history browser (and possibly in other places in the future or in other porcelains). See e.g. git-mailinfo(1). Defaults to utf-8. i18n.logOutputEncoding Character encoding the commit messages are converted to when running git log and friends. imap.folder The folder to drop the mails into, which is typically the Drafts folder. For example: "INBOX.Drafts", "INBOX/Drafts" or "[Gmail]/Drafts". Required. imap.tunnel Command used to set up a tunnel to the IMAP server through which commands will be piped instead of using a direct network connection to the server. Required when imap.host is not set. 
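The http.<url>.* mechanism described above is typically used to scope proxy or TLS settings to a single host while leaving global behaviour alone. A sketch (the host names, proxy, and certificate path are illustrative):

    $ git config --global http.proxy http://proxy.example.com:8080
    $ git config --global http.https://git.internal.example.com.sslCAInfo /etc/ssl/certs/internal-ca.pem
    $ git config --global http.https://git.internal.example.com.version HTTP/1.1

Per the matching rules above, the two scoped settings apply only to URLs under https://git.internal.example.com/, while http.proxy applies to every URL unless a more specific http.<url>.proxy entry overrides it.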
imap.host A URL identifying the server. Use an imap:// prefix for non-secure connections and an imaps:// prefix for secure connections. Ignored when imap.tunnel is set, but required otherwise. imap.user The username to use when logging in to the server. imap.pass The password to use when logging in to the server. imap.port An integer port number to connect to on the server. Defaults to 143 for imap:// hosts and 993 for imaps:// hosts. Ignored when imap.tunnel is set. imap.sslverify A boolean to enable/disable verification of the server certificate used by the SSL/TLS connection. Default is true. Ignored when imap.tunnel is set. imap.preformattedHTML A boolean to enable/disable the use of html encoding when sending a patch. An html encoded patch will be bracketed with <pre> and have a content type of text/html. Ironically, enabling this option causes Thunderbird to send the patch as a plain/text, format=fixed email. Default is false. imap.authMethod Specify the authentication method for authenticating with the IMAP server. If Git was built with the NO_CURL option, or if your curl version is older than 7.34.0, or if youre running git-imap-send with the --no-curl option, the only supported method is CRAM-MD5. If this is not set then git imap-send uses the basic IMAP plaintext LOGIN command. include.path, includeIf.<condition>.path Special variables to include other configuration files. See the "CONFIGURATION FILE" section in the main git-config(1) documentation, specifically the "Includes" and "Conditional Includes" subsections. index.recordEndOfIndexEntries Specifies whether the index file should include an "End Of Index Entry" section. This reduces index load time on multiprocessor machines but produces a message "ignoring EOIE extension" when reading the index using Git versions before 2.20. Defaults to true if index.threads has been explicitly enabled, false otherwise. index.recordOffsetTable Specifies whether the index file should include an "Index Entry Offset Table" section. This reduces index load time on multiprocessor machines but produces a message "ignoring IEOT extension" when reading the index using Git versions before 2.20. Defaults to true if index.threads has been explicitly enabled, false otherwise. index.sparse When enabled, write the index using sparse-directory entries. This has no effect unless core.sparseCheckout and core.sparseCheckoutCone are both enabled. Defaults to false. index.threads Specifies the number of threads to spawn when loading the index. This is meant to reduce index load time on multiprocessor machines. Specifying 0 or true will cause Git to auto-detect the number of CPUs and set the number of threads accordingly. Specifying 1 or false will disable multithreading. Defaults to true. index.version Specify the version with which new index files should be initialized. This does not affect existing repositories. If feature.manyFiles is enabled, then the default is 4. index.skipHash When enabled, do not compute the trailing hash for the index file. This accelerates Git commands that manipulate the index, such as git add, git commit, or git status. Instead of storing the checksum, write a trailing set of bytes with value zero, indicating that the computation was skipped. If you enable index.skipHash, then Git clients older than 2.13.0 will refuse to parse the index and Git clients older than 2.40.0 will report an error during git fsck. init.templateDir Specify the directory from which templates will be copied. 
(See the "TEMPLATE DIRECTORY" section of git-init(1).) init.defaultBranch Allows overriding the default branch name e.g. when initializing a new repository. instaweb.browser Specify the program that will be used to browse your working repository in gitweb. See git-instaweb(1). instaweb.httpd The HTTP daemon command-line to start gitweb on your working repository. See git-instaweb(1). instaweb.local If true the web server started by git-instaweb(1) will be bound to the local IP (127.0.0.1). instaweb.modulePath The default module path for git-instaweb(1) to use instead of /usr/lib/apache2/modules. Only used if httpd is Apache. instaweb.port The port number to bind the gitweb httpd to. See git-instaweb(1). interactive.singleKey In interactive commands, allow the user to provide one-letter input with a single key (i.e., without hitting enter). Currently this is used by the --patch mode of git-add(1), git-checkout(1), git-restore(1), git-commit(1), git-reset(1), and git-stash(1). Note that this setting is silently ignored if portable keystroke input is not available; requires the Perl module Term::ReadKey. interactive.diffFilter When an interactive command (such as git add --patch) shows a colorized diff, git will pipe the diff through the shell command defined by this configuration variable. The command may mark up the diff further for human consumption, provided that it retains a one-to-one correspondence with the lines in the original diff. Defaults to disabled (no filtering). log.abbrevCommit If true, makes git-log(1), git-show(1), and git-whatchanged(1) assume --abbrev-commit. You may override this option with --no-abbrev-commit. log.date Set the default date-time mode for the log command. Setting a value for log.date is similar to using git log's --date option. See git-log(1) for details. If the format is set to "auto:foo" and the pager is in use, format "foo" will be used for the date format. Otherwise, "default" will be used. log.decorate Print out the ref names of any commits that are shown by the log command. If short is specified, the ref name prefixes refs/heads/, refs/tags/ and refs/remotes/ will not be printed. If full is specified, the full ref name (including prefix) will be printed. If auto is specified, then if the output is going to a terminal, the ref names are shown as if short were given, otherwise no ref names are shown. This is the same as the --decorate option of the git log. log.initialDecorationSet By default, git log only shows decorations for certain known ref namespaces. If all is specified, then show all refs as decorations. log.excludeDecoration Exclude the specified patterns from the log decorations. This is similar to the --decorate-refs-exclude command-line option, but the config option can be overridden by the --decorate-refs option. log.diffMerges Set diff format to be used when --diff-merges=on is specified, see --diff-merges in git-log(1) for details. Defaults to separate. log.follow If true, git log will act as if the --follow option was used when a single <path> is given. This has the same limitations as --follow, i.e. it cannot be used to follow multiple files and does not work well on non-linear history. log.graphColors A list of colors, separated by commas, that can be used to draw history lines in git log --graph. log.showRoot If true, the initial commit will be shown as a big creation event. This is equivalent to a diff against an empty tree. Tools like git-log(1) or git-whatchanged(1), which normally hide the root commit will now show it. 
True by default. log.showSignature If true, makes git-log(1), git-show(1), and git-whatchanged(1) assume --show-signature. log.mailmap If true, makes git-log(1), git-show(1), and git-whatchanged(1) assume --use-mailmap, otherwise assume --no-use-mailmap. True by default. lsrefs.unborn May be "advertise" (the default), "allow", or "ignore". If "advertise", the server will respond to the client sending "unborn" (as described in gitprotocol-v2(5)) and will advertise support for this feature during the protocol v2 capability advertisement. "allow" is the same as "advertise" except that the server will not advertise support for this feature; this is useful for load-balanced servers that cannot be updated atomically (for example), since the administrator could configure "allow", then after a delay, configure "advertise". mailinfo.scissors If true, makes git-mailinfo(1) (and therefore git-am(1)) act by default as if the --scissors option was provided on the command-line. When active, this feature removes everything from the message body before a scissors line (i.e. consisting mainly of ">8", "8<" and "-"). mailmap.file The location of an augmenting mailmap file. The default mailmap, located in the root of the repository, is loaded first, then the mailmap file pointed to by this variable. The location of the mailmap file may be in a repository subdirectory, or somewhere outside of the repository itself. See git-shortlog(1) and git-blame(1). mailmap.blob Like mailmap.file, but consider the value as a reference to a blob in the repository. If both mailmap.file and mailmap.blob are given, both are parsed, with entries from mailmap.file taking precedence. In a bare repository, this defaults to HEAD:.mailmap. In a non-bare repository, it defaults to empty. maintenance.auto This boolean config option controls whether some commands run git maintenance run --auto after doing their normal work. Defaults to true. maintenance.strategy This string config option provides a way to specify one of a few recommended schedules for background maintenance. This only affects which tasks are run during git maintenance run --schedule=X commands, provided no --task=<task> arguments are provided. Further, if a maintenance.<task>.schedule config value is set, then that value is used instead of the one provided by maintenance.strategy. The possible strategy strings are: none: This default setting implies no tasks are run at any schedule. incremental: This setting optimizes for performing small maintenance activities that do not delete any data. This does not schedule the gc task, but runs the prefetch and commit-graph tasks hourly, the loose-objects and incremental-repack tasks daily, and the pack-refs task weekly. maintenance.<task>.enabled This boolean config option controls whether the maintenance task with name <task> is run when no --task option is specified to git maintenance run. These config values are ignored if a --task option exists. By default, only maintenance.gc.enabled is true. maintenance.<task>.schedule This config option controls whether or not the given <task> runs during a git maintenance run --schedule=<frequency> command. The value must be one of "hourly", "daily", or "weekly". maintenance.commit-graph.auto This integer config option controls how often the commit-graph task should be run as part of git maintenance run --auto. If zero, then the commit-graph task will not run with the --auto option. A negative value will force the task to run every time. 
Otherwise, a positive value implies the command should run when the number of reachable commits that are not in the commit-graph file is at least the value of maintenance.commit-graph.auto. The default value is 100. maintenance.loose-objects.auto This integer config option controls how often the loose-objects task should be run as part of git maintenance run --auto. If zero, then the loose-objects task will not run with the --auto option. A negative value will force the task to run every time. Otherwise, a positive value implies the command should run when the number of loose objects is at least the value of maintenance.loose-objects.auto. The default value is 100. maintenance.incremental-repack.auto This integer config option controls how often the incremental-repack task should be run as part of git maintenance run --auto. If zero, then the incremental-repack task will not run with the --auto option. A negative value will force the task to run every time. Otherwise, a positive value implies the command should run when the number of pack-files not in the multi-pack-index is at least the value of maintenance.incremental-repack.auto. The default value is 10. man.viewer Specify the programs that may be used to display help in the man format. See git-help(1). man.<tool>.cmd Specify the command to invoke the specified man viewer. The specified command is evaluated in shell with the man page passed as an argument. (See git-help(1).) man.<tool>.path Override the path for the given tool that may be used to display help in the man format. See git-help(1). merge.conflictStyle Specify the style in which conflicted hunks are written out to working tree files upon merge. The default is "merge", which shows a <<<<<<< conflict marker, changes made by one side, a ======= marker, changes made by the other side, and then a >>>>>>> marker. An alternate style, "diff3", adds a ||||||| marker and the original text before the ======= marker. The "merge" style tends to produce smaller conflict regions than diff3, both because of the exclusion of the original text, and because when a subset of lines match on the two sides, they are just pulled out of the conflict region. Another alternate style, "zdiff3", is similar to diff3 but removes matching lines on the two sides from the conflict region when those matching lines appear near either the beginning or end of a conflict region. merge.defaultToUpstream If merge is called without any commit argument, merge the upstream branches configured for the current branch by using their last observed values stored in their remote-tracking branches. The values of the branch.<current branch>.merge that name the branches at the remote named by branch.<current branch>.remote are consulted, and then they are mapped via remote.<remote>.fetch to their corresponding remote-tracking branches, and the tips of these tracking branches are merged. Defaults to true. merge.ff By default, Git does not create an extra merge commit when merging a commit that is a descendant of the current commit. Instead, the tip of the current branch is fast-forwarded. When set to false, this variable tells Git to create an extra merge commit in such a case (equivalent to giving the --no-ff option from the command line). When set to only, only such fast-forward merges are allowed (equivalent to giving the --ff-only option from the command line). merge.verifySignatures If true, this is equivalent to the --verify-signatures command line option. See git-merge(1) for details. 
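As an illustration of the merge settings above, a user who prefers zdiff3-style conflict markers and wants plain git merge to allow only fast-forward merges by default could set (one possible combination, not a recommendation):

    $ git config --global merge.conflictStyle zdiff3
    $ git config --global merge.ff only
    $ git config --global merge.verifySignatures true

With merge.ff set to only, a non-fast-forward git merge aborts unless a fast-forward is possible or the behaviour is overridden on the command line.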
merge.branchdesc In addition to branch names, populate the log message with the branch description text associated with them. Defaults to false. merge.log In addition to branch names, populate the log message with at most the specified number of one-line descriptions from the actual commits that are being merged. Defaults to false, and true is a synonym for 20. merge.suppressDest By adding a glob that matches the names of integration branches to this multi-valued configuration variable, the default merge message computed for merges into these integration branches will omit "into <branch name>" from its title. An element with an empty value can be used to clear the list of globs accumulated from previous configuration entries. When there is no merge.suppressDest variable defined, the default value of master is used for backward compatibility. merge.renameLimit The number of files to consider in the exhaustive portion of rename detection during a merge. If not specified, defaults to the value of diff.renameLimit. If neither merge.renameLimit nor diff.renameLimit are specified, currently defaults to 7000. This setting has no effect if rename detection is turned off. merge.renames Whether Git detects renames. If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. Defaults to the value of diff.renames. merge.directoryRenames Whether Git detects directory renames, affecting what happens at merge time to new files added to a directory on one side of history when that directory was renamed on the other side of history. If merge.directoryRenames is set to "false", directory rename detection is disabled, meaning that such new files will be left behind in the old directory. If set to "true", directory rename detection is enabled, meaning that such new files will be moved into the new directory. If set to "conflict", a conflict will be reported for such paths. If merge.renames is false, merge.directoryRenames is ignored and treated as false. Defaults to "conflict". merge.renormalize Tell Git that canonical representation of files in the repository has changed over time (e.g. earlier commits record text files with CRLF line endings, but recent ones use LF line endings). In such a repository, Git can convert the data recorded in commits to a canonical form before performing a merge to reduce unnecessary conflicts. For more information, see section "Merging branches with differing checkin/checkout attributes" in gitattributes(5). merge.stat Whether to print the diffstat between ORIG_HEAD and the merge result at the end of the merge. True by default. merge.autoStash When set to true, automatically create a temporary stash entry before the operation begins, and apply it after the operation ends. This means that you can run merge on a dirty worktree. However, use with care: the final stash application after a successful merge might result in non-trivial conflicts. This option can be overridden by the --no-autostash and --autostash options of git-merge(1). Defaults to false. merge.tool Controls which merge tool is used by git-mergetool(1). The list below shows the valid built-in values. Any other value is treated as a custom merge tool and requires that a corresponding mergetool.<tool>.cmd variable is defined. merge.guitool Controls which merge tool is used by git-mergetool(1) when the -g/--gui flag is specified. The list below shows the valid built-in values. 
Any other value is treated as a custom merge tool and requires that a corresponding mergetool.<guitool>.cmd variable is defined.
araxis - Use Araxis Merge (requires a graphical session)
bc - Use Beyond Compare (requires a graphical session)
bc3 - Use Beyond Compare (requires a graphical session)
bc4 - Use Beyond Compare (requires a graphical session)
codecompare - Use Code Compare (requires a graphical session)
deltawalker - Use DeltaWalker (requires a graphical session)
diffmerge - Use DiffMerge (requires a graphical session)
diffuse - Use Diffuse (requires a graphical session)
ecmerge - Use ECMerge (requires a graphical session)
emerge - Use Emacs' Emerge
examdiff - Use ExamDiff Pro (requires a graphical session)
guiffy - Use Guiffy's Diff Tool (requires a graphical session)
gvimdiff - Use gVim (requires a graphical session) with a custom layout (see git help mergetool's BACKEND SPECIFIC HINTS section)
gvimdiff1 - Use gVim (requires a graphical session) with a 2-pane layout (LOCAL and REMOTE)
gvimdiff2 - Use gVim (requires a graphical session) with a 3-pane layout (LOCAL, MERGED and REMOTE)
gvimdiff3 - Use gVim (requires a graphical session) where only the MERGED file is shown
kdiff3 - Use KDiff3 (requires a graphical session)
meld - Use Meld (requires a graphical session) with optional auto merge (see git help mergetool's CONFIGURATION section)
nvimdiff - Use Neovim with a custom layout (see git help mergetool's BACKEND SPECIFIC HINTS section)
nvimdiff1 - Use Neovim with a 2-pane layout (LOCAL and REMOTE)
nvimdiff2 - Use Neovim with a 3-pane layout (LOCAL, MERGED and REMOTE)
nvimdiff3 - Use Neovim where only the MERGED file is shown
opendiff - Use FileMerge (requires a graphical session)
p4merge - Use HelixCore P4Merge (requires a graphical session)
smerge - Use Sublime Merge (requires a graphical session)
tkdiff - Use TkDiff (requires a graphical session)
tortoisemerge - Use TortoiseMerge (requires a graphical session)
vimdiff - Use Vim with a custom layout (see git help mergetool's BACKEND SPECIFIC HINTS section)
vimdiff1 - Use Vim with a 2-pane layout (LOCAL and REMOTE)
vimdiff2 - Use Vim with a 3-pane layout (LOCAL, MERGED and REMOTE)
vimdiff3 - Use Vim where only the MERGED file is shown
winmerge - Use WinMerge (requires a graphical session)
xxdiff - Use xxdiff (requires a graphical session)
merge.verbosity Controls the amount of output shown by the recursive merge strategy. Level 0 outputs nothing except a final error message if conflicts were detected. Level 1 outputs only conflicts, 2 outputs conflicts and file changes. Level 5 and above outputs debugging information. The default is level 2. Can be overridden by the GIT_MERGE_VERBOSITY environment variable. merge.<driver>.name Defines a human-readable name for a custom low-level merge driver. See gitattributes(5) for details. merge.<driver>.driver Defines the command that implements a custom low-level merge driver. See gitattributes(5) for details. merge.<driver>.recursive Names a low-level merge driver to be used when performing an internal merge between common ancestors. See gitattributes(5) for details. mergetool.<tool>.path Override the path for the given tool. This is useful in case your tool is not in the PATH. mergetool.<tool>.cmd Specify the command to invoke the specified merge tool.
The specified command is evaluated in shell with the following variables available: BASE is the name of a temporary file containing the common base of the files to be merged, if available; LOCAL is the name of a temporary file containing the contents of the file on the current branch; REMOTE is the name of a temporary file containing the contents of the file from the branch being merged; MERGED contains the name of the file to which the merge tool should write the results of a successful merge. mergetool.<tool>.hideResolved Allows the user to override the global mergetool.hideResolved value for a specific tool. See mergetool.hideResolved for the full description. mergetool.<tool>.trustExitCode For a custom merge command, specify whether the exit code of the merge command can be used to determine whether the merge was successful. If this is not set to true then the merge target file timestamp is checked, and the merge is assumed to have been successful if the file has been updated; otherwise, the user is prompted to indicate the success of the merge. mergetool.meld.hasOutput Older versions of meld do not support the --output option. Git will attempt to detect whether meld supports --output by inspecting the output of meld --help. Configuring mergetool.meld.hasOutput will make Git skip these checks and use the configured value instead. Setting mergetool.meld.hasOutput to true tells Git to unconditionally use the --output option, and false avoids using --output. mergetool.meld.useAutoMerge When the --auto-merge option is given, meld will merge all non-conflicting parts automatically, highlight the conflicting parts, and wait for user decision. Setting mergetool.meld.useAutoMerge to true tells Git to unconditionally use the --auto-merge option with meld. Setting this value to auto makes git detect whether --auto-merge is supported and will only use --auto-merge when available. A value of false avoids using --auto-merge altogether, and is the default value. mergetool.vimdiff.layout The vimdiff backend uses this variable to control how its split windows appear. Applies even if you are using Neovim (nvim) or gVim (gvim) as the merge tool. See the BACKEND SPECIFIC HINTS section in git-mergetool(1) for details. mergetool.hideResolved During a merge, Git will automatically resolve as many conflicts as possible and write the MERGED file containing conflict markers around any conflicts that it cannot resolve; LOCAL and REMOTE normally represent the versions of the file from before Git's conflict resolution. This flag causes LOCAL and REMOTE to be overwritten so that only the unresolved conflicts are presented to the merge tool. Can be configured per-tool via the mergetool.<tool>.hideResolved configuration variable. Defaults to false. mergetool.keepBackup After performing a merge, the original file with conflict markers can be saved as a file with a .orig extension. If this variable is set to false then this file is not preserved. Defaults to true (i.e. keep the backup files). mergetool.keepTemporaries When invoking a custom merge tool, Git uses a set of temporary files to pass to the tool. If the tool returns an error and this variable is set to true, then these temporary files will be preserved; otherwise, they will be removed after the tool has exited. Defaults to false. mergetool.writeToTemp Git writes temporary BASE, LOCAL, and REMOTE versions of conflicting files in the worktree by default. Git will attempt to use a temporary directory for these files when set true. Defaults to false.
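As a sketch of the mergetool.<tool>.cmd mechanism described above, the following registers a hypothetical tool named mymerge (the command and its flags are invented for illustration) and tells Git to trust its exit code:

    $ git config --global merge.tool mymerge
    $ git config --global mergetool.mymerge.cmd 'mymerge --base "$BASE" --ours "$LOCAL" --theirs "$REMOTE" --out "$MERGED"'
    $ git config --global mergetool.mymerge.trustExitCode true
    $ git config --global mergetool.keepBackup false

When git mergetool runs, BASE, LOCAL, REMOTE, and MERGED are replaced with the temporary file names described above; with trustExitCode set, a non-zero exit status from the tool marks that file's merge as failed.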
mergetool.prompt Prompt before each invocation of the merge resolution program. mergetool.guiDefault Set true to use the merge.guitool by default (equivalent to specifying the --gui argument), or auto to select merge.guitool or merge.tool depending on the presence of a DISPLAY environment variable value. The default is false, where the --gui argument must be provided explicitly for the merge.guitool to be used. notes.mergeStrategy Which merge strategy to choose by default when resolving notes conflicts. Must be one of manual, ours, theirs, union, or cat_sort_uniq. Defaults to manual. See the "NOTES MERGE STRATEGIES" section of git-notes(1) for more information on each strategy. This setting can be overridden by passing the --strategy option to git-notes(1). notes.<name>.mergeStrategy Which merge strategy to choose when doing a notes merge into refs/notes/<name>. This overrides the more general "notes.mergeStrategy". See the "NOTES MERGE STRATEGIES" section in git-notes(1) for more information on the available strategies. notes.displayRef Which ref (or refs, if a glob or specified more than once), in addition to the default set by core.notesRef or GIT_NOTES_REF, to read notes from when showing commit messages with the git log family of commands. This setting can be overridden with the GIT_NOTES_DISPLAY_REF environment variable, which must be a colon separated list of refs or globs. A warning will be issued for refs that do not exist, but a glob that does not match any refs is silently ignored. This setting can be disabled by the --no-notes option to the git log family of commands, or by the --notes=<ref> option accepted by those commands. The effective value of "core.notesRef" (possibly overridden by GIT_NOTES_REF) is also implicitly added to the list of refs to be displayed. notes.rewrite.<command> When rewriting commits with <command> (currently amend or rebase), if this variable is false, git will not copy notes from the original to the rewritten commit. Defaults to true. See also "notes.rewriteRef" below. This setting can be overridden with the GIT_NOTES_REWRITE_REF environment variable, which must be a colon separated list of refs or globs. notes.rewriteMode When copying notes during a rewrite (see the "notes.rewrite.<command>" option), determines what to do if the target commit already has a note. Must be one of overwrite, concatenate, cat_sort_uniq, or ignore. Defaults to concatenate. This setting can be overridden with the GIT_NOTES_REWRITE_MODE environment variable. notes.rewriteRef When copying notes during a rewrite, specifies the (fully qualified) ref whose notes should be copied. May be a glob, in which case notes in all matching refs will be copied. You may also specify this configuration several times. Does not have a default value; you must configure this variable to enable note rewriting. Set it to refs/notes/commits to enable rewriting for the default commit notes. Can be overridden with the GIT_NOTES_REWRITE_REF environment variable. See notes.rewrite.<command> above for a further description of its format. pack.window The size of the window used by git-pack-objects(1) when no window size is given on the command line. Defaults to 10. pack.depth The maximum delta depth used by git-pack-objects(1) when no maximum depth is given on the command line. Defaults to 50. Maximum value is 4095. pack.windowMemory The maximum size of memory that is consumed by each thread in git-pack-objects(1) for pack window memory when no limit is given on the command line. 
The value can be suffixed with "k", "m", or "g". When left unconfigured (or set explicitly to 0), there will be no limit. pack.compression An integer -1..9, indicating the compression level for objects in a pack file. -1 is the zlib default. 0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest. If not set, defaults to core.compression. If that is not set, defaults to -1, the zlib default, which is "a default compromise between speed and compression (currently equivalent to level 6)." Note that changing the compression level will not automatically recompress all existing objects. You can force recompression by passing the -F option to git-repack(1). pack.allowPackReuse When true, and when reachability bitmaps are enabled, pack-objects will try to send parts of the bitmapped packfile verbatim. This can reduce memory and CPU usage to serve fetches, but might result in sending a slightly larger pack. Defaults to true. pack.island An extended regular expression configuring a set of delta islands. See "DELTA ISLANDS" in git-pack-objects(1) for details. pack.islandCore Specify an island name which gets to have its objects be packed first. This creates a kind of pseudo-pack at the front of one pack, so that the objects from the specified island are hopefully faster to copy into any pack that should be served to a user requesting these objects. In practice this means that the island specified should likely correspond to what is the most commonly cloned in the repo. See also "DELTA ISLANDS" in git-pack-objects(1). pack.deltaCacheSize The maximum memory in bytes used for caching deltas in git-pack-objects(1) before writing them out to a pack. This cache is used to speed up the writing object phase by not having to recompute the final delta result once the best match for all objects is found. Repacking large repositories on machines which are tight with memory might be badly impacted by this though, especially if this cache pushes the system into swapping. A value of 0 means no limit. The smallest size of 1 byte may be used to virtually disable this cache. Defaults to 256 MiB. pack.deltaCacheLimit The maximum size of a delta, that is cached in git-pack-objects(1). This cache is used to speed up the writing object phase by not having to recompute the final delta result once the best match for all objects is found. Defaults to 1000. Maximum value is 65535. pack.threads Specifies the number of threads to spawn when searching for best delta matches. This requires that git-pack-objects(1) be compiled with pthreads otherwise this option is ignored with a warning. This is meant to reduce packing time on multiprocessor machines. The required amount of memory for the delta search window is however multiplied by the number of threads. Specifying 0 will cause Git to auto-detect the number of CPUs and set the number of threads accordingly. pack.indexVersion Specify the default pack index version. Valid values are 1 for legacy pack index used by Git versions prior to 1.5.2, and 2 for the new pack index with capabilities for packs larger than 4 GB as well as proper protection against the repacking of corrupted packs. Version 2 is the default. Note that version 2 is enforced and this config option is ignored whenever the corresponding pack is larger than 2 GB. If you have an old Git that does not understand the version 2 *.idx file, cloning or fetching over a non-native protocol (e.g. 
"http") that will copy both *.pack file and corresponding *.idx file from the other side may give you a repository that cannot be accessed with your older version of Git. If the *.pack file is smaller than 2 GB, however, you can use git-index-pack(1) on the *.pack file to regenerate the *.idx file. pack.packSizeLimit The maximum size of a pack. This setting only affects packing to a file when repacking, i.e. the git:// protocol is unaffected. It can be overridden by the --max-pack-size option of git-repack(1). Reaching this limit results in the creation of multiple packfiles. Note that this option is rarely useful, and may result in a larger total on-disk size (because Git will not store deltas between packs) and worse runtime performance (object lookup within multiple packs is slower than a single pack, and optimizations like reachability bitmaps cannot cope with multiple packs). If you need to actively run Git using smaller packfiles (e.g., because your filesystem does not support large files), this option may help. But if your goal is to transmit a packfile over a medium that supports limited sizes (e.g., removable media that cannot store the whole repository), you are likely better off creating a single large packfile and splitting it using a generic multi-volume archive tool (e.g., Unix split). The minimum size allowed is limited to 1 MiB. The default is unlimited. Common unit suffixes of k, m, or g are supported. pack.useBitmaps When true, git will use pack bitmaps (if available) when packing to stdout (e.g., during the server side of a fetch). Defaults to true. You should not generally need to turn this off unless you are debugging pack bitmaps. pack.useBitmapBoundaryTraversal When true, Git will use an experimental algorithm for computing reachability queries with bitmaps. Instead of building up complete bitmaps for all of the negated tips and then OR-ing them together, consider negated tips with existing bitmaps as additive (i.e. OR-ing them into the result if they exist, ignoring them otherwise), and build up a bitmap at the boundary instead. When using this algorithm, Git may include too many objects as a result of not opening up trees belonging to certain UNINTERESTING commits. This inexactness matches the non-bitmap traversal algorithm. In many cases, this can provide a speed-up over the exact algorithm, particularly when there is poor bitmap coverage of the negated side of the query. pack.useSparse When true, git will default to using the --sparse option in git pack-objects when the --revs option is present. This algorithm only walks trees that appear in paths that introduce new objects. This can have significant performance benefits when computing a pack to send a small change. However, it is possible that extra objects are added to the pack-file if the included commits contain certain types of direct renames. Default is true. pack.preferBitmapTips When selecting which commits will receive bitmaps, prefer a commit at the tip of any reference that is a suffix of any value of this configuration over any other commits in the "selection window". Note that setting this configuration to refs/foo does not mean that the commits at the tips of refs/foo/bar and refs/foo/baz will necessarily be selected. This is because commits are selected for bitmaps from within a series of windows of variable length. 
If a commit at the tip of any reference which is a suffix of any value of this configuration is seen in a window, it is immediately given preference over any other commit in that window. pack.writeBitmaps (deprecated) This is a deprecated synonym for repack.writeBitmaps. pack.writeBitmapHashCache When true, git will include a "hash cache" section in the bitmap index (if one is written). This cache can be used to feed git's delta heuristics, potentially leading to better deltas between bitmapped and non-bitmapped objects (e.g., when serving a fetch between an older, bitmapped pack and objects that have been pushed since the last gc). The downside is that it consumes 4 bytes per object of disk space. Defaults to true. When writing a multi-pack reachability bitmap, no new namehashes are computed; instead, any namehashes stored in an existing bitmap are permuted into their appropriate location when writing a new bitmap. pack.writeBitmapLookupTable When true, Git will include a "lookup table" section in the bitmap index (if one is written). This table is used to defer loading individual bitmaps as late as possible. This can be beneficial in repositories that have relatively large bitmap indexes. Defaults to false. pack.readReverseIndex When true, git will read any .rev file(s) that may be available (see: gitformat-pack(5)). When false, the reverse index will be generated from scratch and stored in memory. Defaults to true. pack.writeReverseIndex When true, git will write a corresponding .rev file (see: gitformat-pack(5)) for each new packfile that it writes in all places except for git-fast-import(1) and in the bulk checkin mechanism. Defaults to true. pager.<cmd> If the value is boolean, turns on or off pagination of the output of a particular Git subcommand when writing to a tty. Otherwise, turns on pagination for the subcommand using the pager specified by the value of pager.<cmd>. If --paginate or --no-pager is specified on the command line, it takes precedence over this option. To disable pagination for all commands, set core.pager or GIT_PAGER to cat. pretty.<name> Alias for a --pretty= format string, as specified in git-log(1). Any aliases defined here can be used just as the built-in pretty formats could. For example, running git config pretty.changelog "format:* %H %s" would cause the invocation git log --pretty=changelog to be equivalent to running git log "--pretty=format:* %H %s". Note that an alias with the same name as a built-in format will be silently ignored. protocol.allow If set, provide a user-defined default policy for all protocols which don't explicitly have a policy (protocol.<name>.allow). By default, if unset, known-safe protocols (http, https, git, ssh) have a default policy of always, known-dangerous protocols (ext) have a default policy of never, and all other protocols (including file) have a default policy of user. Supported policies: always - protocol is always able to be used. never - protocol is never able to be used. user - protocol is only able to be used when GIT_PROTOCOL_FROM_USER is either unset or has a value of 1. This policy should be used when you want a protocol to be directly usable by the user but don't want it used by commands which execute clone/fetch/push commands without user input, e.g. recursive submodule initialization. protocol.<name>.allow Set a policy to be used by protocol <name> with clone/fetch/push commands. See protocol.allow above for the available policies. 
The protocol names currently used by git are: file: any local file-based path (including file:// URLs, or local paths) git: the anonymous git protocol over a direct TCP connection (or proxy, if configured) ssh: git over ssh (including host:path syntax, ssh://, etc). http: git over http, both "smart http" and "dumb http". Note that this does not include https; if you want to configure both, you must do so individually. any external helpers are named by their protocol (e.g., use hg to allow the git-remote-hg helper) protocol.version If set, clients will attempt to communicate with a server using the specified protocol version. If the server does not support it, communication falls back to version 0. If unset, the default is 2. Supported versions: 0 - the original wire protocol. 1 - the original wire protocol with the addition of a version string in the initial response from the server. 2 - Wire protocol version 2, see gitprotocol-v2(5). pull.ff By default, Git does not create an extra merge commit when merging a commit that is a descendant of the current commit. Instead, the tip of the current branch is fast-forwarded. When set to false, this variable tells Git to create an extra merge commit in such a case (equivalent to giving the --no-ff option from the command line). When set to only, only such fast-forward merges are allowed (equivalent to giving the --ff-only option from the command line). This setting overrides merge.ff when pulling. pull.rebase When true, rebase branches on top of the fetched branch, instead of merging the default branch from the default remote when "git pull" is run. See "branch.<name>.rebase" for setting this on a per-branch basis. When merges (or just m), pass the --rebase-merges option to git rebase so that the local merge commits are included in the rebase (see git-rebase(1) for details). When the value is interactive (or just i), the rebase is run in interactive mode. NOTE: this is a possibly dangerous operation; do not use it unless you understand the implications (see git-rebase(1) for details). pull.octopus The default merge strategy to use when pulling multiple branches at once. pull.twohead The default merge strategy to use when pulling a single branch. push.autoSetupRemote If set to "true" assume --set-upstream on default push when no upstream tracking exists for the current branch; this option takes effect with push.default options simple, upstream, and current. It is useful if by default you want new branches to be pushed to the default remote (like the behavior of push.default=current) and you also want the upstream tracking to be set. Workflows most likely to benefit from this option are simple central workflows where all branches are expected to have the same name on the remote. push.default Defines the action git push should take if no refspec is given (whether from the command-line, config, or elsewhere). Different values are well-suited for specific workflows; for instance, in a purely central workflow (i.e. the fetch source is equal to the push destination), upstream is probably what you want. Possible values are: nothing - do not push anything (error out) unless a refspec is given. This is primarily meant for people who want to avoid mistakes by always being explicit. current - push the current branch to update a branch with the same name on the receiving end. Works in both central and non-central workflows. 
upstream - push the current branch back to the branch whose changes are usually integrated into the current branch (which is called @{upstream}). This mode only makes sense if you are pushing to the same repository you would normally pull from (i.e. central workflow). tracking - This is a deprecated synonym for upstream. simple - push the current branch with the same name on the remote. If you are working on a centralized workflow (pushing to the same repository you pull from, which is typically origin), then you need to configure an upstream branch with the same name. This mode is the default since Git 2.0, and is the safest option suited for beginners. matching - push all branches having the same name on both ends. This makes the repository you are pushing to remember the set of branches that will be pushed out (e.g. if you always push maint and master there and no other branches, the repository you push to will have these two branches, and your local maint and master will be pushed there). To use this mode effectively, you have to make sure all the branches you would push out are ready to be pushed out before running git push, as the whole point of this mode is to allow you to push all of the branches in one go. If you usually finish work on only one branch and push out the result, while other branches are unfinished, this mode is not for you. Also this mode is not suitable for pushing into a shared central repository, as other people may add new branches there, or update the tip of existing branches outside your control. This used to be the default, but not since Git 2.0 (simple is the new default). push.followTags If set to true, enable the --follow-tags option by default. You may override this configuration at time of push by specifying --no-follow-tags. push.gpgSign May be set to a boolean value, or the string if-asked. A true value causes all pushes to be GPG signed, as if --signed is passed to git-push(1). The string if-asked causes pushes to be signed if the server supports it, as if --signed=if-asked is passed to git push. A false value may override a value from a lower-priority config file. An explicit command-line flag always overrides this config option. push.pushOption When no --push-option=<option> argument is given from the command line, git push behaves as if each <value> of this variable is given as --push-option=<value>. This is a multi-valued variable, and an empty value can be used in a higher-priority configuration file (e.g. .git/config in a repository) to clear the values inherited from lower-priority configuration files (e.g. $HOME/.gitconfig). Example:

    /etc/gitconfig
        push.pushoption = a
        push.pushoption = b

    ~/.gitconfig
        push.pushoption = c

    repo/.git/config
        push.pushoption =
        push.pushoption = b

This will result in only b (a and c are cleared). push.recurseSubmodules May be "check", "on-demand", "only", or "no", with the same behavior as that of "push --recurse-submodules". If not set, no is used by default, unless submodule.recurse is set (in which case a true value means on-demand). push.useForceIfIncludes If set to "true", it is equivalent to specifying --force-if-includes as an option to git-push(1) in the command line. Adding --no-force-if-includes at the time of push overrides this configuration setting. push.negotiate If set to "true", attempt to reduce the size of the packfile sent by rounds of negotiation in which the client and the server attempt to find commits in common. 
If "false", Git will rely solely on the servers ref advertisement to find commits in common. push.useBitmaps If set to "false", disable use of bitmaps for "git push" even if pack.useBitmaps is "true", without preventing other git operations from using bitmaps. Default is true. rebase.backend Default backend to use for rebasing. Possible choices are apply or merge. In the future, if the merge backend gains all remaining capabilities of the apply backend, this setting may become unused. rebase.stat Whether to show a diffstat of what changed upstream since the last rebase. False by default. rebase.autoSquash If set to true, enable the --autosquash option of git-rebase(1) by default for interactive mode. This can be overridden with the --no-autosquash option. rebase.autoStash When set to true, automatically create a temporary stash entry before the operation begins, and apply it after the operation ends. This means that you can run rebase on a dirty worktree. However, use with care: the final stash application after a successful rebase might result in non-trivial conflicts. This option can be overridden by the --no-autostash and --autostash options of git-rebase(1). Defaults to false. rebase.updateRefs If set to true enable --update-refs option by default. rebase.missingCommitsCheck If set to "warn", git rebase -i will print a warning if some commits are removed (e.g. a line was deleted), however the rebase will still proceed. If set to "error", it will print the previous warning and stop the rebase, git rebase --edit-todo can then be used to correct the error. If set to "ignore", no checking is done. To drop a commit without warning or error, use the drop command in the todo list. Defaults to "ignore". rebase.instructionFormat A format string, as specified in git-log(1), to be used for the todo list during an interactive rebase. The format will automatically have the long commit hash prepended to the format. rebase.abbreviateCommands If set to true, git rebase will use abbreviated command names in the todo list resulting in something like this: p deadbee The oneline of the commit p fa1afe1 The oneline of the next commit ... instead of: pick deadbee The oneline of the commit pick fa1afe1 The oneline of the next commit ... Defaults to false. rebase.rescheduleFailedExec Automatically reschedule exec commands that failed. This only makes sense in interactive mode (or when an --exec option was provided). This is the same as specifying the --reschedule-failed-exec option. rebase.forkPoint If set to false set --no-fork-point option by default. rebase.rebaseMerges Whether and how to set the --rebase-merges option by default. Can be rebase-cousins, no-rebase-cousins, or a boolean. Setting to true or to no-rebase-cousins is equivalent to --rebase-merges=no-rebase-cousins, setting to rebase-cousins is equivalent to --rebase-merges=rebase-cousins, and setting to false is equivalent to --no-rebase-merges. Passing --rebase-merges on the command line, with or without an argument, overrides any rebase.rebaseMerges configuration. rebase.maxLabelLength When generating label names from commit subjects, truncate the names to this length. By default, the names are truncated to a little less than NAME_MAX (to allow e.g. .lock files to be written for the corresponding loose refs). receive.advertiseAtomic By default, git-receive-pack will advertise the atomic push capability to its clients. If you dont want to advertise this capability, set this variable to false. 
receive.advertisePushOptions When set to true, git-receive-pack will advertise the push options capability to its clients. False by default. receive.autogc By default, git-receive-pack will run "git-gc --auto" after receiving data from git-push and updating refs. You can stop it by setting this variable to false. receive.certNonceSeed By setting this variable to a string, git receive-pack will accept a git push --signed and verify it by using a "nonce" protected by HMAC using this string as a secret key. receive.certNonceSlop When a git push --signed sends a push certificate with a "nonce" that was issued by a receive-pack serving the same repository within this many seconds, export the "nonce" found in the certificate to GIT_PUSH_CERT_NONCE to the hooks (instead of what the receive-pack asked the sending side to include). This may make it a bit easier to write checks in pre-receive and post-receive. Instead of checking the GIT_PUSH_CERT_NONCE_SLOP environment variable, which records by how many seconds the nonce is stale, to decide whether they want to accept the certificate, they can simply check that GIT_PUSH_CERT_NONCE_STATUS is OK. receive.fsckObjects If it is set to true, git-receive-pack will check all received objects. See transfer.fsckObjects for what's checked. Defaults to false. If not set, the value of transfer.fsckObjects is used instead. receive.fsck.<msg-id> Acts like fsck.<msg-id>, but is used by git-receive-pack(1) instead of git-fsck(1). See the fsck.<msg-id> documentation for details. receive.fsck.skipList Acts like fsck.skipList, but is used by git-receive-pack(1) instead of git-fsck(1). See the fsck.skipList documentation for details. receive.keepAlive After receiving the pack from the client, receive-pack may produce no output (if --quiet was specified) while processing the pack, causing some networks to drop the TCP connection. With this option set, if receive-pack does not transmit any data in this phase for receive.keepAlive seconds, it will send a short keepalive packet. The default is 5 seconds; set to 0 to disable keepalives entirely. receive.unpackLimit If the number of objects received in a push is below this limit then the objects will be unpacked into loose object files. However, if the number of received objects equals or exceeds this limit then the received pack will be stored as a pack, after adding any missing delta bases. Storing the pack from a push can make the push operation complete faster, especially on slow filesystems. If not set, the value of transfer.unpackLimit is used instead. receive.maxInputSize If the size of the incoming pack stream is larger than this limit, then git-receive-pack will error out, instead of accepting the pack file. If not set or set to 0, then the size is unlimited. receive.denyDeletes If set to true, git-receive-pack will deny a ref update that deletes the ref. Use this to prevent such a ref deletion via a push. receive.denyDeleteCurrent If set to true, git-receive-pack will deny a ref update that deletes the currently checked out branch of a non-bare repository. receive.denyCurrentBranch If set to true or "refuse", git-receive-pack will deny a ref update to the currently checked out branch of a non-bare repository. Such a push is potentially dangerous because it brings the HEAD out of sync with the index and working tree. If set to "warn", print a warning of such a push to stderr, but allow the push to proceed. If set to false or "ignore", allow such pushes with no message. Defaults to "refuse". 
Another option is "updateInstead" which will update the working tree if pushing into the current branch. This option is intended for synchronizing working directories when one side is not easily accessible via interactive ssh (e.g. a live web site, hence the requirement that the working directory be clean). This mode also comes in handy when developing inside a VM to test and fix code on different Operating Systems. By default, "updateInstead" will refuse the push if the working tree or the index have any difference from the HEAD, but the push-to-checkout hook can be used to customize this. See githooks(5). receive.denyNonFastForwards If set to true, git-receive-pack will deny a ref update which is not a fast-forward. Use this to prevent such an update via a push, even if that push is forced. This configuration variable is set when initializing a shared repository. receive.hideRefs This variable is the same as transfer.hideRefs, but applies only to receive-pack (and so affects pushes, but not fetches). An attempt to update or delete a hidden ref by git push is rejected. receive.procReceiveRefs This is a multi-valued variable that defines reference prefixes to match the commands in receive-pack. Commands matching the prefixes will be executed by an external hook "proc-receive", instead of the internal execute_commands function. If this variable is not defined, the "proc-receive" hook will never be used, and all commands will be executed by the internal execute_commands function. For example, if this variable is set to "refs/for", pushing to reference such as "refs/for/master" will not create or update a reference named "refs/for/master", but may create or update a pull request directly by running the hook "proc-receive". Optional modifiers can be provided in the beginning of the value to filter commands for specific actions: create (a), modify (m), delete (d). A ! can be included in the modifiers to negate the reference prefix entry. E.g.: git config --system --add receive.procReceiveRefs ad:refs/heads git config --system --add receive.procReceiveRefs !:refs/heads receive.updateServerInfo If set to true, git-receive-pack will run git-update-server-info after receiving data from git-push and updating refs. receive.shallowUpdate If set to true, .git/shallow can be updated when new refs require new shallow roots. Otherwise those refs are rejected. remote.pushDefault The remote to push to by default. Overrides branch.<name>.remote for all branches, and is overridden by branch.<name>.pushRemote for specific branches. remote.<name>.url The URL of a remote repository. See git-fetch(1) or git-push(1). remote.<name>.pushurl The push URL of a remote repository. See git-push(1). remote.<name>.proxy For remotes that require curl (http, https and ftp), the URL to the proxy to use for that remote. Set to the empty string to disable proxying for that remote. remote.<name>.proxyAuthMethod For remotes that require curl (http, https and ftp), the method to use for authenticating against the proxy in use (probably set in remote.<name>.proxy). See http.proxyAuthMethod. remote.<name>.fetch The default set of "refspec" for git-fetch(1). See git-fetch(1). remote.<name>.push The default set of "refspec" for git-push(1). See git-push(1). remote.<name>.mirror If true, pushing to this remote will automatically behave as if the --mirror option was given on the command line. 
remote.<name>.skipDefaultUpdate If true, this remote will be skipped by default when updating using git-fetch(1) or the update subcommand of git-remote(1). remote.<name>.skipFetchAll If true, this remote will be skipped by default when updating using git-fetch(1) or the update subcommand of git-remote(1). remote.<name>.receivepack The default program to execute on the remote side when pushing. See option --receive-pack of git-push(1). remote.<name>.uploadpack The default program to execute on the remote side when fetching. See option --upload-pack of git-fetch-pack(1). remote.<name>.tagOpt Setting this value to --no-tags disables automatic tag following when fetching from remote <name>. Setting it to --tags will fetch every tag from remote <name>, even if they are not reachable from remote branch heads. Passing these flags directly to git-fetch(1) can override this setting. See options --tags and --no-tags of git-fetch(1). remote.<name>.vcs Setting this to a value <vcs> will cause Git to interact with the remote with the git-remote-<vcs> helper. remote.<name>.prune When set to true, fetching from this remote by default will also remove any remote-tracking references that no longer exist on the remote (as if the --prune option was given on the command line). Overrides fetch.prune settings, if any. remote.<name>.pruneTags When set to true, fetching from this remote by default will also remove any local tags that no longer exist on the remote if pruning is activated in general via remote.<name>.prune, fetch.prune or --prune. Overrides fetch.pruneTags settings, if any. See also remote.<name>.prune and the PRUNING section of git-fetch(1). remote.<name>.promisor When set to true, this remote will be used to fetch promisor objects. remote.<name>.partialclonefilter The filter that will be applied when fetching from this promisor remote. Changing or clearing this value will only affect fetches for new commits. To fetch associated objects for commits already present in the local object database, use the --refetch option of git-fetch(1). remotes.<group> The list of remotes which are fetched by "git remote update <group>". See git-remote(1). repack.useDeltaBaseOffset By default, git-repack(1) creates packs that use delta-base offset. If you need to share your repository with Git older than version 1.4.4, either directly or via a dumb protocol such as http, then you need to set this option to "false" and repack. Access from old Git versions over the native protocol is unaffected by this option. repack.packKeptObjects If set to true, makes git repack act as if --pack-kept-objects was passed. See git-repack(1) for details. Defaults to false normally, but true if a bitmap index is being written (either via --write-bitmap-index or repack.writeBitmaps). repack.useDeltaIslands If set to true, makes git repack act as if --delta-islands was passed. Defaults to false. repack.writeBitmaps When true, git will write a bitmap index when packing all objects to disk (e.g., when git repack -a is run). This index can speed up the "counting objects" phase of subsequent packs created for clones and fetches, at the cost of some disk space and extra time spent on the initial repack. This has no effect if multiple packfiles are created. Defaults to true on bare repos, false otherwise. repack.updateServerInfo If set to false, git-repack(1) will not run git-update-server-info(1). Defaults to true. Can be overridden when true by the -n option of git-repack(1). 
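For instance, a repository that serves many clones might combine the repack.* settings above before repacking (a sketch only; whether these values help depends on how the repository is served):

    git config repack.writeBitmaps true
    git config repack.updateServerInfo true
    git repack -a -d

Note that repack.writeBitmaps already defaults to true on bare repositories, so setting it explicitly is mainly relevant for non-bare setups.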
repack.cruftWindow, repack.cruftWindowMemory, repack.cruftDepth, repack.cruftThreads Parameters used by git-pack-objects(1) when generating a cruft pack and the respective parameters are not given over the command line. See similarly named pack.* configuration variables for defaults and meaning. rerere.autoUpdate When set to true, git-rerere updates the index with the resulting contents after it cleanly resolves conflicts using previously recorded resolutions. Defaults to false. rerere.enabled Activate recording of resolved conflicts, so that identical conflict hunks can be resolved automatically, should they be encountered again. By default, git-rerere(1) is enabled if there is an rr-cache directory under the $GIT_DIR, e.g. if "rerere" was previously used in the repository. revert.reference Setting this variable to true makes git revert behave as if the --reference option is given. safe.bareRepository Specifies which bare repositories Git will work with. The currently supported values are: all: Git works with all bare repositories. This is the default. explicit: Git only works with bare repositories specified via the top-level --git-dir command-line option, or the GIT_DIR environment variable (see git(1)). If you do not use bare repositories in your workflow, then it may be beneficial to set safe.bareRepository to explicit in your global config. This will protect you from attacks that involve cloning a repository that contains a bare repository and running a Git command within that directory. This config setting is only respected in protected configuration (see the section called SCOPES). This prevents untrusted repositories from tampering with this value. safe.directory These config entries specify Git-tracked directories that are considered safe even if they are owned by someone other than the current user. By default, Git will refuse to even parse a Git config of a repository owned by someone else, let alone run its hooks, and this config setting allows users to specify exceptions, e.g. for intentionally shared repositories (see the --shared option in git-init(1)). This is a multi-valued setting, i.e. you can add more than one directory via git config --add. To reset the list of safe directories (e.g. to override any such directories specified in the system config), add a safe.directory entry with an empty value. This config setting is only respected in protected configuration (see the section called SCOPES). This prevents untrusted repositories from tampering with this value. The value of this setting is interpolated, i.e. ~/<path> expands to a path relative to the home directory and %(prefix)/<path> expands to a path relative to Git's (runtime) prefix. To completely opt-out of this security check, set safe.directory to the string *. This will allow all repositories to be treated as if their directory was listed in the safe.directory list. If safe.directory=* is set in system config and you want to re-enable this protection, then initialize your list with an empty value before listing the repositories that you deem safe. As explained, Git only allows you to access repositories owned by yourself, i.e. the user who is running Git, by default. When Git is running as root on a non-Windows platform that provides sudo, however, git checks the SUDO_UID environment variable that sudo creates and will allow access to the uid recorded as its value in addition to the id from root. This is to make it easy to perform a common sequence during installation "make && sudo make install". 
A git process running under sudo runs as root but the sudo command exports the environment variable to record which id the original user has. If that is not what you would prefer and want git to only trust repositories that are owned by root instead, then you can remove the SUDO_UID variable from root's environment before invoking git. sendemail.identity A configuration identity. When given, causes values in the sendemail.<identity> subsection to take precedence over values in the sendemail section. The default identity is the value of sendemail.identity. sendemail.smtpEncryption See git-send-email(1) for description. Note that this setting is not subject to the identity mechanism. sendemail.smtpsslcertpath Path to ca-certificates (either a directory or a single file). Set it to an empty string to disable certificate verification. sendemail.<identity>.* Identity-specific versions of the sendemail.* parameters found below, taking precedence over those when this identity is selected, through either the command-line or sendemail.identity. sendemail.multiEdit If true (default), a single editor instance will be spawned to edit files you have to edit (patches when --annotate is used, and the summary when --compose is used). If false, files will be edited one after the other, spawning a new editor each time. sendemail.confirm Sets the default for whether to confirm before sending. Must be one of always, never, cc, compose, or auto. See --confirm in the git-send-email(1) documentation for the meaning of these values. sendemail.aliasesFile To avoid typing long email addresses, point this to one or more email aliases files. You must also supply sendemail.aliasFileType. sendemail.aliasFileType Format of the file(s) specified in sendemail.aliasesFile. Must be one of mutt, mailrc, pine, elm, gnus, or sendmail. What an alias file in each format looks like can be found in the documentation of the email program of the same name. The differences and limitations from the standard formats are described below: sendmail Quoted aliases and quoted addresses are not supported: lines that contain a " symbol are ignored. Redirection to a file (/path/name) or pipe (|command) is not supported. File inclusion (:include: /path/name) is not supported. Warnings are printed on the standard error output for any explicitly unsupported constructs, and any other lines that are not recognized by the parser. sendemail.annotate, sendemail.bcc, sendemail.cc, sendemail.ccCmd, sendemail.chainReplyTo, sendemail.envelopeSender, sendemail.from, sendemail.headerCmd, sendemail.signedoffbycc, sendemail.smtpPass, sendemail.suppresscc, sendemail.suppressFrom, sendemail.to, sendemail.tocmd, sendemail.smtpDomain, sendemail.smtpServer, sendemail.smtpServerPort, sendemail.smtpServerOption, sendemail.smtpUser, sendemail.thread, sendemail.transferEncoding, sendemail.validate, sendemail.xmailer These configuration variables all provide a default for git-send-email(1) command-line options. See its documentation for details. sendemail.signedoffcc (deprecated) Deprecated alias for sendemail.signedoffbycc. sendemail.smtpBatchSize Number of messages to be sent per connection, after which a relogin will happen. If the value is 0 or undefined, send all messages in one connection. See also the --batch-size option of git-send-email(1). sendemail.smtpReloginDelay Seconds to wait before reconnecting to the smtp server. See also the --relogin-delay option of git-send-email(1). 
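Putting several of the sendemail.* variables above together, a global configuration for sending patches through an SMTP relay might look like the following sketch (server, port, and user are placeholders; see git-send-email(1) for the meaning of each key):

    [sendemail]
        smtpServer = smtp.example.com
        smtpServerPort = 587
        smtpEncryption = tls
        smtpUser = alice@example.com
        smtpBatchSize = 100
        smtpReloginDelay = 5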
sendemail.forbidSendmailVariables To avoid common misconfiguration mistakes, git-send-email(1) will abort with a warning if any configuration options for "sendmail" exist. Set this variable to bypass the check. sequence.editor Text editor used by git rebase -i for editing the rebase instruction file. The value is meant to be interpreted by the shell when it is used. It can be overridden by the GIT_SEQUENCE_EDITOR environment variable. When not configured, the default commit message editor is used instead. showBranch.default The default set of branches for git-show-branch(1). See git-show-branch(1). sparse.expectFilesOutsideOfPatterns Typically with sparse checkouts, files not matching any sparsity patterns are marked with a SKIP_WORKTREE bit in the index and are missing from the working tree. Accordingly, Git will ordinarily check whether files with the SKIP_WORKTREE bit are in fact present in the working tree contrary to expectations. If Git finds any, it marks those paths as present by clearing the relevant SKIP_WORKTREE bits. This option can be used to tell Git that such present-despite-skipped files are expected and to stop checking for them. The default is false, which allows Git to automatically recover from the list of files in the index and working tree falling out of sync. Set this to true if you are in a setup where some external factor relieves Git of the responsibility for maintaining the consistency between the presence of working tree files and sparsity patterns. For example, if you have a Git-aware virtual file system that has a robust mechanism for keeping the working tree and the sparsity patterns up to date based on access patterns. Regardless of this setting, Git does not check for present-despite-skipped files unless sparse checkout is enabled, so this config option has no effect unless core.sparseCheckout is true. splitIndex.maxPercentChange When the split index feature is used, this specifies the percent of entries the split index can contain compared to the total number of entries in both the split index and the shared index before a new shared index is written. The value should be between 0 and 100. If the value is 0, then a new shared index is always written; if it is 100, a new shared index is never written. By default, the value is 20, so a new shared index is written if the number of entries in the split index would be greater than 20 percent of the total number of entries. See git-update-index(1). splitIndex.sharedIndexExpire When the split index feature is used, shared index files that were not modified since the time this variable specifies will be removed when a new shared index file is created. The value "now" expires all entries immediately, and "never" suppresses expiration altogether. The default value is "2.weeks.ago". Note that a shared index file is considered modified (for the purpose of expiration) each time a new split-index file is either created based on it or read from it. See git-update-index(1). ssh.variant By default, Git determines the command line arguments to use based on the basename of the configured SSH command (configured using the environment variable GIT_SSH or GIT_SSH_COMMAND or the config setting core.sshCommand). If the basename is unrecognized, Git will attempt to detect support of OpenSSH options by first invoking the configured SSH command with the -G (print configuration) option and will subsequently use OpenSSH options (if that is successful) or no options besides the host and remote command (if it fails). 
The config variable ssh.variant can be set to override this detection. Valid values are ssh (to use OpenSSH options), plink, putty, tortoiseplink, simple (no options except the host and remote command). The default auto-detection can be explicitly requested using the value auto. Any other value is treated as ssh. This setting can also be overridden via the environment variable GIT_SSH_VARIANT. The current command-line parameters used for each variant are as follows:

    ssh - [-p port] [-4] [-6] [-o option] [username@]host command
    simple - [username@]host command
    plink or putty - [-P port] [-4] [-6] [username@]host command
    tortoiseplink - [-P port] [-4] [-6] -batch [username@]host command

Except for the simple variant, command-line parameters are likely to change as git gains new features. status.relativePaths By default, git-status(1) shows paths relative to the current directory. Setting this variable to false shows paths relative to the repository root (this was the default for Git prior to v1.5.4). status.short Set to true to enable --short by default in git-status(1). The option --no-short takes precedence over this variable. status.branch Set to true to enable --branch by default in git-status(1). The option --no-branch takes precedence over this variable. status.aheadBehind Set to true to enable --ahead-behind and false to enable --no-ahead-behind by default in git-status(1) for non-porcelain status formats. Defaults to true. status.displayCommentPrefix If set to true, git-status(1) will insert a comment prefix before each output line (starting with core.commentChar, i.e. # by default). This was the behavior of git-status(1) in Git 1.8.4 and previous. Defaults to false. status.renameLimit The number of files to consider when performing rename detection in git-status(1) and git-commit(1). Defaults to the value of diff.renameLimit. status.renames Whether and how Git detects renames in git-status(1) and git-commit(1). If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. If set to "copies" or "copy", Git will detect copies, as well. Defaults to the value of diff.renames. status.showStash If set to true, git-status(1) will display the number of entries currently stashed away. Defaults to false. status.showUntrackedFiles By default, git-status(1) and git-commit(1) show files which are not currently tracked by Git. Directories which contain only untracked files are shown with the directory name only. Showing untracked files means that Git needs to lstat() all the files in the whole repository, which might be slow on some systems. So, this variable controls how the commands display the untracked files. Possible values are: no - Show no untracked files. normal - Show untracked files and directories. all - Show also individual files in untracked directories. If this variable is not specified, it defaults to normal. This variable can be overridden with the -u|--untracked-files option of git-status(1) and git-commit(1). status.submoduleSummary Defaults to false. If this is set to a non-zero number or true (identical to -1 or an unlimited number), the submodule summary will be enabled and a summary of commits for modified submodules will be shown (see --summary-limit option of git-submodule(1)). Please note that the summary output command will be suppressed for all submodules when diff.ignoreSubmodules is set to all or only for those submodules where submodule.<name>.ignore=all. 
The only exception to that rule is that status and commit will show staged submodule changes. To also view the summary for ignored submodules you can either use the --ignore-submodules=dirty command-line option or the git submodule summary command, which shows a similar output but does not honor these settings. stash.showIncludeUntracked If this is set to true, the git stash show command will show the untracked files of a stash entry. Defaults to false. See the description of the show command in git-stash(1). stash.showPatch If this is set to true, the git stash show command without an option will show the stash entry in patch form. Defaults to false. See the description of the show command in git-stash(1). stash.showStat If this is set to true, the git stash show command without an option will show a diffstat of the stash entry. Defaults to true. See the description of the show command in git-stash(1). submodule.<name>.url The URL for a submodule. This variable is copied from the .gitmodules file to the git config via git submodule init. The user can change the configured URL before obtaining the submodule via git submodule update. If neither submodule.<name>.active nor submodule.active are set, the presence of this variable is used as a fallback to indicate whether the submodule is of interest to git commands. See git-submodule(1) and gitmodules(5) for details. submodule.<name>.update The method by which a submodule is updated by git submodule update, which is the only affected command; others, such as git checkout --recurse-submodules, are unaffected. It exists for historical reasons, when git submodule was the only command to interact with submodules; settings like submodule.active and pull.rebase are more specific. It is populated by git submodule init from the gitmodules(5) file. See description of update command in git-submodule(1). submodule.<name>.branch The remote branch name for a submodule, used by git submodule update --remote. Set this option to override the value found in the .gitmodules file. See git-submodule(1) and gitmodules(5) for details. submodule.<name>.fetchRecurseSubmodules This option can be used to control recursive fetching of this submodule. It can be overridden by using the --[no-]recurse-submodules command-line option to "git fetch" and "git pull". This setting will override the one found in the gitmodules(5) file. submodule.<name>.ignore Defines under what circumstances "git status" and the diff family show a submodule as modified. When set to "all", it will never be considered modified (but it will nonetheless show up in the output of status and commit when it has been staged), "dirty" will ignore all changes to the submodule's work tree and takes only differences between the HEAD of the submodule and the commit recorded in the superproject into account. "untracked" will additionally let submodules with modified tracked files in their work tree show up. Using "none" (the default when this option is not set) also shows submodules that have untracked files in their work tree as changed. This setting overrides any setting made in .gitmodules for this submodule; both settings can be overridden on the command line by using the "--ignore-submodules" option. The git submodule commands are not affected by this setting. submodule.<name>.active Boolean value indicating if the submodule is of interest to git commands. This config option takes precedence over the submodule.active config option. See gitsubmodules(7) for details. 
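For example, the per-submodule settings above can be combined to mark a single submodule as interesting while quieting its status output (the submodule name plugins/foo is hypothetical):

    git config submodule.plugins/foo.active true
    git config submodule.plugins/foo.update rebase
    git config submodule.plugins/foo.ignore dirty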
submodule.active A repeated field which contains a pathspec used to match against a submodule's path to determine if the submodule is of interest to git commands. See gitsubmodules(7) for details. submodule.recurse A boolean indicating if commands should enable the --recurse-submodules option by default. Defaults to false. When set to true, it can be deactivated via the --no-recurse-submodules option. Note that some Git commands lacking this option may call some of the above commands affected by submodule.recurse; for instance git remote update will call git fetch but does not have a --no-recurse-submodules option. For these commands a workaround is to temporarily change the configuration value by using git -c submodule.recurse=0. The following list shows the commands that accept --recurse-submodules and whether they are supported by this setting. checkout, fetch, grep, pull, push, read-tree, reset, restore and switch are always supported. clone and ls-files are not supported. branch is supported only if submodule.propagateBranches is enabled. submodule.propagateBranches [EXPERIMENTAL] A boolean that enables branching support when using --recurse-submodules or submodule.recurse=true. Enabling this will allow certain commands to accept --recurse-submodules and certain commands that already accept --recurse-submodules will now consider branches. Defaults to false. submodule.fetchJobs Specifies how many submodules are fetched/cloned at the same time. A positive integer allows up to that number of submodules fetched in parallel. A value of 0 will give some reasonable default. If unset, it defaults to 1. submodule.alternateLocation Specifies how the submodules obtain alternates when submodules are cloned. Possible values are no, superproject. By default no is assumed, which doesn't add references. When the value is set to superproject, the submodule to be cloned computes its alternates location relative to the superproject's alternate. submodule.alternateErrorStrategy Specifies how to treat errors with the alternates for a submodule as computed via submodule.alternateLocation. Possible values are ignore, info, die. Default is die. Note that if set to ignore or info, and if there is an error with the computed alternate, the clone proceeds as if no alternate was specified. tag.forceSignAnnotated A boolean to specify whether annotated tags created should be GPG signed. If --annotate is specified on the command line, it takes precedence over this option. tag.sort This variable controls the sort ordering of tags when displayed by git-tag(1). Without the "--sort=<value>" option provided, the value of this variable will be used as the default. tag.gpgSign A boolean to specify whether all tags should be GPG signed. Use of this option when running in an automated script can result in a large number of tags being signed. It is therefore convenient to use an agent to avoid typing your gpg passphrase several times. Note that this option doesn't affect tag signing behavior enabled by "-u <keyid>" or "--local-user=<keyid>" options. tar.umask This variable can be used to restrict the permission bits of tar archive entries. The default is 0002, which turns off the world write bit. The special value "user" indicates that the archiving user's umask will be used instead. See umask(2) and git-archive(1). Trace2 config settings are only read from the system and global config files; repository local and worktree config files and -c command line arguments are not respected. 
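As a concrete illustration of the trace2.* variables described below, performance tracing might be enabled in the global config like this (the target directory is only an example; per the description below, a path that already exists as a directory receives one trace file per process, and any of the value forms listed under trace2.eventTarget may be used instead):

    git config --global trace2.perfTarget /tmp/git-perf-traces/
    git config --global trace2.perfBrief true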
trace2.normalTarget This variable controls the normal target destination. It may be overridden by the GIT_TRACE2 environment variable. The following table shows possible values. trace2.perfTarget This variable controls the performance target destination. It may be overridden by the GIT_TRACE2_PERF environment variable. The following table shows possible values. trace2.eventTarget This variable controls the event target destination. It may be overridden by the GIT_TRACE2_EVENT environment variable. The following table shows possible values.

    0 or false - Disables the target.
    1 or true - Writes to STDERR.
    [2-9] - Writes to the already opened file descriptor.
    <absolute-pathname> - Writes to the file in append mode. If the target already exists and is a directory, the traces will be written to files (one per process) underneath the given directory.
    af_unix:[<socket_type>:]<absolute-pathname> - Write to a Unix Domain Socket (on platforms that support them). Socket type can be either stream or dgram; if omitted Git will try both.

trace2.normalBrief Boolean. When true, time, filename, and line fields are omitted from normal output. May be overridden by the GIT_TRACE2_BRIEF environment variable. Defaults to false. trace2.perfBrief Boolean. When true, time, filename, and line fields are omitted from PERF output. May be overridden by the GIT_TRACE2_PERF_BRIEF environment variable. Defaults to false. trace2.eventBrief Boolean. When true, time, filename, and line fields are omitted from event output. May be overridden by the GIT_TRACE2_EVENT_BRIEF environment variable. Defaults to false. trace2.eventNesting Integer. Specifies desired depth of nested regions in the event output. Regions deeper than this value will be omitted. May be overridden by the GIT_TRACE2_EVENT_NESTING environment variable. Defaults to 2. trace2.configParams A comma-separated list of patterns of "important" config settings that should be recorded in the trace2 output. For example, core.*,remote.*.url would cause the trace2 output to contain events listing each configured remote. May be overridden by the GIT_TRACE2_CONFIG_PARAMS environment variable. Unset by default. trace2.envVars A comma-separated list of "important" environment variables that should be recorded in the trace2 output. For example, GIT_HTTP_USER_AGENT,GIT_CONFIG would cause the trace2 output to contain events listing the overrides for HTTP user agent and the location of the Git configuration file (assuming any are set). May be overridden by the GIT_TRACE2_ENV_VARS environment variable. Unset by default. trace2.destinationDebug Boolean. When true, Git will print error messages when a trace target destination cannot be opened for writing. By default, these errors are suppressed and tracing is silently disabled. May be overridden by the GIT_TRACE2_DST_DEBUG environment variable. trace2.maxFiles Integer. When writing trace files to a target directory, do not write additional traces if doing so would exceed this many files. Instead, write a sentinel file that will block further tracing to this directory. Defaults to 0, which disables this check. transfer.credentialsInUrl A configured URL can contain plaintext credentials in the form <protocol>://<user>:<password>@<domain>/<path>. You may want to warn or forbid the use of such configuration (in favor of using git-credential(1)). This will be used on git-clone(1), git-fetch(1), git-push(1), and any other direct use of the configured URL. 
Note that this is currently limited to detecting credentials in remote.<name>.url configuration; it won't detect credentials in remote.<name>.pushurl configuration. You might want to enable this to prevent inadvertent credentials exposure, e.g. because: The OS or system where you're running git may not provide a way or otherwise allow you to configure the permissions of the configuration file where the username and/or password are stored. Even if it does, having such data stored "at rest" might expose you in other ways, e.g. a backup process might copy the data to another system. The git programs will pass the full URL to one another as arguments on the command-line, meaning the credentials will be exposed to other unprivileged users on systems that allow them to see the full process list of other users. On Linux, the "hidepid" setting documented in procfs(5) allows for configuring this behavior. If such concerns don't apply to you, then you probably don't need to be concerned about credentials exposure due to storing sensitive data in git's configuration files. If you do want to use this, set transfer.credentialsInUrl to one of these values: allow (default): Git will proceed with its activity without warning. warn: Git will write a warning message to stderr when parsing a URL with a plaintext credential. die: Git will write a failure message to stderr when parsing a URL with a plaintext credential. transfer.fsckObjects When fetch.fsckObjects or receive.fsckObjects are not set, the value of this variable is used instead. Defaults to false. When set, the fetch or receive will abort in the case of a malformed object or a link to a nonexistent object. In addition, various other issues are checked for, including legacy issues (see fsck.<msg-id>), and potential security issues like the existence of a .GIT directory or a malicious .gitmodules file (see the release notes for v2.2.1 and v2.17.1 for details). Other sanity and security checks may be added in future releases. On the receiving side, failing fsckObjects will make those objects unreachable, see "QUARANTINE ENVIRONMENT" in git-receive-pack(1). On the fetch side, malformed objects will instead be left unreferenced in the repository. Due to the non-quarantine nature of the fetch.fsckObjects implementation it cannot be relied upon to leave the object store clean like receive.fsckObjects can. As objects are unpacked, they're written to the object store, so there can be cases where malicious objects get introduced even though the "fetch" failed, only to have a subsequent "fetch" succeed because only new incoming objects are checked, not those that have already been written to the object store. That difference in behavior should not be relied upon. In the future, such objects may be quarantined for "fetch" as well. For now, the paranoid need to find some way to emulate the quarantine environment if they'd like the same protection as "push". E.g. in the case of an internal mirror do the mirroring in two steps, one to fetch the untrusted objects, and then do a second "push" (which will use the quarantine) to another internal repo, and have internal clients consume this pushed-to repository, or embargo internal fetches and only allow them once a full "fsck" has run (and no new fetches have happened in the meantime). transfer.hideRefs String(s) receive-pack and upload-pack use to decide which refs to omit from their initial advertisements. Use more than one definition to specify multiple prefix strings. 
A ref that is under the hierarchies listed in the value of this variable is excluded, and is hidden when responding to git push or git fetch. See receive.hideRefs and uploadpack.hideRefs for program-specific versions of this config. You may also include a ! in front of the ref name to negate the entry, explicitly exposing it, even if an earlier entry marked it as hidden. If you have multiple hideRefs values, later entries override earlier ones (and entries in more-specific config files override less-specific ones). If a namespace is in use, the namespace prefix is stripped from each reference before it is matched against transfer.hiderefs patterns. In order to match refs before stripping, add a ^ in front of the ref name. If you combine ! and ^, ! must be specified first. For example, if refs/heads/master is specified in transfer.hideRefs and the current namespace is foo, then refs/namespaces/foo/refs/heads/master is omitted from the advertisements. If uploadpack.allowRefInWant is set, upload-pack will treat want-ref refs/heads/master in a protocol v2 fetch command as if refs/namespaces/foo/refs/heads/master did not exist. receive-pack, on the other hand, will still advertise the object id the ref is pointing to without mentioning its name (a so-called ".have" line). Even if you hide refs, a client may still be able to steal the target objects via the techniques described in the "SECURITY" section of the gitnamespaces(7) man page; it's best to keep private data in a separate repository. transfer.unpackLimit When fetch.unpackLimit or receive.unpackLimit are not set, the value of this variable is used instead. The default value is 100. transfer.advertiseSID Boolean. When true, client and server processes will advertise their unique session IDs to their remote counterpart. Defaults to false. transfer.bundleURI When true, local git clone commands will request bundle information from the remote server (if advertised) and download bundles before continuing the clone through the Git protocol. Defaults to false. uploadarchive.allowUnreachable If true, allow clients to use git archive --remote to request any tree, whether reachable from the ref tips or not. See the discussion in the "SECURITY" section of git-upload-archive(1) for more details. Defaults to false. uploadpack.hideRefs This variable is the same as transfer.hideRefs, but applies only to upload-pack (and so affects only fetches, not pushes). An attempt to fetch a hidden ref by git fetch will fail. See also uploadpack.allowTipSHA1InWant. uploadpack.allowTipSHA1InWant When uploadpack.hideRefs is in effect, allow upload-pack to accept a fetch request that asks for an object at the tip of a hidden ref (by default, such a request is rejected). See also uploadpack.hideRefs. Even if this is false, a client may be able to steal objects via the techniques described in the "SECURITY" section of the gitnamespaces(7) man page; it's best to keep private data in a separate repository. uploadpack.allowReachableSHA1InWant Allow upload-pack to accept a fetch request that asks for an object that is reachable from any ref tip. However, note that calculating object reachability is computationally expensive. Defaults to false. Even if this is false, a client may be able to steal objects via the techniques described in the "SECURITY" section of the gitnamespaces(7) man page; it's best to keep private data in a separate repository. uploadpack.allowAnySHA1InWant Allow upload-pack to accept a fetch request that asks for any object at all. Defaults to false. 
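For example, a server that keeps refs under a private hierarchy out of its advertisements, while still letting clients fetch the exact tips of those refs, might use something like the following sketch (the refs/internal prefix is hypothetical):

    git config --system uploadpack.hideRefs refs/internal
    git config --system uploadpack.allowTipSHA1InWant true

As the entries above note, hiding refs is not an access-control mechanism; a determined client may still be able to obtain the objects.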
uploadpack.keepAlive When upload-pack has started pack-objects, there may be a quiet period while pack-objects prepares the pack. Normally it would output progress information, but if --quiet was used for the fetch, pack-objects will output nothing at all until the pack data begins. Some clients and networks may consider the server to be hung and give up. Setting this option instructs upload-pack to send an empty keepalive packet every uploadpack.keepAlive seconds. Setting this option to 0 disables keepalive packets entirely. The default is 5 seconds. uploadpack.packObjectsHook If this option is set, when upload-pack would run git pack-objects to create a packfile for a client, it will run this shell command instead. The pack-objects command and arguments it would have run (including the git pack-objects at the beginning) are appended to the shell command. The stdin and stdout of the hook are treated as if pack-objects itself was run. I.e., upload-pack will feed input intended for pack-objects to the hook, and expects a completed packfile on stdout. Note that this configuration variable is only respected when it is specified in protected configuration (see the section called SCOPES). This is a safety measure against fetching from untrusted repositories. uploadpack.allowFilter If this option is set, upload-pack will support partial clone and partial fetch object filtering. uploadpackfilter.allow Provides a default value for unspecified object filters (see: the below configuration variable). If set to true, this will also enable all filters which get added in the future. Defaults to true. uploadpackfilter.<filter>.allow Explicitly allow or ban the object filter corresponding to <filter>, where <filter> may be one of: blob:none, blob:limit, object:type, tree, sparse:oid, or combine. If using combined filters, both combine and all of the nested filter kinds must be allowed. Defaults to uploadpackfilter.allow. uploadpackfilter.tree.maxDepth Only allow --filter=tree:<n> when <n> is no more than the value of uploadpackfilter.tree.maxDepth. If set, this also implies uploadpackfilter.tree.allow=true, unless this configuration variable had already been set. Has no effect if unset. uploadpack.allowRefInWant If this option is set, upload-pack will support the ref-in-want feature of the protocol version 2 fetch command. This feature is intended for the benefit of load-balanced servers which may not have the same view of what OIDs their refs point to due to replication delay. url.<base>.insteadOf Any URL that starts with this value will be rewritten to start, instead, with <base>. In cases where some site serves a large number of repositories, and serves them with multiple access methods, and some users need to use different access methods, this feature allows people to specify any of the equivalent URLs and have Git automatically rewrite the URL to the best alternative for the particular user, even for a never-before-seen repository on the site. When more than one insteadOf strings match a given URL, the longest match is used. Note that any protocol restrictions will be applied to the rewritten URL. If the rewrite changes the URL to use a custom protocol or remote helper, you may need to adjust the protocol.*.allow config to permit the request. In particular, protocols you expect to use for submodules must be set to always rather than the default of user. See the description of protocol.allow above. 
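A common use of url.<base>.insteadOf is to rewrite read-only HTTPS URLs to SSH for a particular host (the host name is a placeholder):

    [url "ssh://git@example.com/"]
        insteadOf = https://example.com/

With this in place, any configured or typed URL beginning with https://example.com/ is rewritten to the ssh:// form before Git contacts the remote.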
url.<base>.pushInsteadOf Any URL that starts with this value will not be pushed to; instead, it will be rewritten to start with <base>, and the resulting URL will be pushed to. In cases where some site serves a large number of repositories, and serves them with multiple access methods, some of which do not allow push, this feature allows people to specify a pull-only URL and have Git automatically use an appropriate URL to push, even for a never-before-seen repository on the site. When more than one pushInsteadOf strings match a given URL, the longest match is used. If a remote has an explicit pushurl, Git will ignore this setting for that remote. user.name, user.email, author.name, author.email, committer.name, committer.email The user.name and user.email variables determine what ends up in the author and committer fields of commit objects. If you need the author or committer to be different, the author.name, author.email, committer.name, or committer.email variables can be set. All of these can be overridden by the GIT_AUTHOR_NAME, GIT_AUTHOR_EMAIL, GIT_COMMITTER_NAME, GIT_COMMITTER_EMAIL, and EMAIL environment variables. Note that the name forms of these variables conventionally refer to some form of a personal name. See git-commit(1) and the environment variables section of git(1) for more information on these settings and the credential.username option if you're looking for authentication credentials instead. user.useConfigOnly Instruct Git to avoid trying to guess defaults for user.email and user.name, and instead retrieve the values only from the configuration. For example, if you have multiple email addresses and would like to use a different one for each repository, then with this configuration option set to true in the global config along with a name, Git will prompt you to set up an email before making new commits in a newly cloned repository. Defaults to false. user.signingKey If git-tag(1) or git-commit(1) is not selecting the key you want it to automatically when creating a signed tag or commit, you can override the default selection with this variable. This option is passed unchanged to gpg's --local-user parameter, so you may specify a key using any method that gpg supports. If gpg.format is set to ssh this can contain the path to either your private ssh key or the public key when ssh-agent is used. Alternatively it can contain a public key prefixed with key:: directly (e.g.: "key::ssh-rsa XXXXXX identifier"). The private key needs to be available via ssh-agent. If not set Git will call gpg.ssh.defaultKeyCommand (e.g.: "ssh-add -L") and try to use the first key available. For backward compatibility, a raw key which begins with "ssh-", such as "ssh-rsa XXXXXX identifier", is treated as "key::ssh-rsa XXXXXX identifier", but this form is deprecated; use the key:: form instead. versionsort.prereleaseSuffix (deprecated) Deprecated alias for versionsort.suffix. Ignored if versionsort.suffix is set. versionsort.suffix Even when version sort is used in git-tag(1), tagnames with the same base version but different suffixes are still sorted lexicographically, resulting e.g. in prerelease tags appearing after the main release (e.g. "1.0-rc1" after "1.0"). This variable can be specified to determine the sorting order of tags with different suffixes. By specifying a single suffix in this variable, any tagname containing that suffix will appear before the corresponding main release. E.g. if the variable is set to "-rc", then all "1.0-rcX" tags will appear before "1.0". 
If specified multiple times, once per suffix, then the order of suffixes in the configuration will determine the sorting order of tagnames with those suffixes. E.g. if "-pre" appears before "-rc" in the configuration, then all "1.0-preX" tags will be listed before any "1.0-rcX" tags. The placement of the main release tag relative to tags with various suffixes can be determined by specifying the empty suffix among those other suffixes. E.g. if the suffixes "-rc", "", "-ck", and "-bfs" appear in the configuration in this order, then all "v4.8-rcX" tags are listed first, followed by "v4.8", then "v4.8-ckX" and finally "v4.8-bfsX". If more than one suffix matches the same tagname, then that tagname will be sorted according to the suffix which starts at the earliest position in the tagname. If more than one different matching suffix starts at that earliest position, then that tagname will be sorted according to the longest of those suffixes. The sorting order between different suffixes is undefined if they are in multiple config files. web.browser Specify a web browser that may be used by some commands. Currently only git-instaweb(1) and git-help(1) may use it. worktree.guessRemote If no branch is specified and neither -b nor -B nor --detach is used, then git worktree add defaults to creating a new branch from HEAD. If worktree.guessRemote is set to true, worktree add tries to find a remote-tracking branch whose name uniquely matches the new branch name. If such a branch exists, it is checked out and set as "upstream" for the new branch. If no such match can be found, it falls back to creating a new branch from the current HEAD. BUGS top When using the deprecated [section.subsection] syntax, changing a value will result in adding a multi-line key instead of a change, if the subsection is given with at least one uppercase character. For example when the config looks like [section.subsection] key = value1 and running git config section.Subsection.key value2 will result in [section.subsection] key = value1 key = value2 GIT top Part of the git(1) suite NOTES top 1. the bundle URI design document file:///home/mtk/share/doc/git-doc/technical/bundle-uri.html COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CONFIG(1) Pages that refer to this page: git(1), git-add(1), git-am(1), git-apply(1), git-blame(1), git-branch(1), git-check-ignore(1), git-check-mailmap(1), git-checkout(1), git-clean(1), git-clone(1), git-column(1), git-commit(1), git-commit-graph(1), git-commit-tree(1), git-config(1), git-diff(1), git-diff-files(1), git-diff-index(1), git-difftool(1), git-diff-tree(1), git-fast-import(1), git-fetch(1), git-for-each-ref(1), git-format-patch(1), git-fsck(1), git-fsmonitor--daemon(1), git-gc(1), git-grep(1), git-help(1), git-imap-send(1), git-init(1), git-interpret-trailers(1), git-log(1), git-ls-files(1), git-ls-remote(1), git-ls-tree(1), git-mailinfo(1), git-maintenance(1), git-merge(1), git-mergetool(1), git-merge-tree(1), git-multi-pack-index(1), git-notes(1), git-pack-objects(1), git-pull(1), git-push(1), git-range-diff(1), git-rebase(1), git-receive-pack(1), git-remote(1), git-repack(1), git-replay(1), git-reset(1), git-restore(1), git-revert(1), git-rev-list(1), git-rev-parse(1), git-rm(1), git-send-email(1), git-shortlog(1), git-show(1), git-show-branch(1), git-sparse-checkout(1), git-stash(1), git-status(1), git-switch(1), git-tag(1), git-update-index(1), git-var(1), gitweb(1), git-web--browse(1), git-worktree(1), stg(1), stg-email(1), gitattributes(5), gitformat-signature(5), githooks(5), gitmailmap(5), gitmodules(5), gitprotocol-v2(5), gitcredentials(7), gitcvs-migration(7), gitfaq(7), gitsubmodules(7), gittutorial(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
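A hedged illustration of the user.* identity variables and user.useConfigOnly described earlier (names and addresses are placeholders):

    $ git config --global user.name "Ada Lovelace"
    $ git config --global user.email "ada@example.com"
    $ git config --global user.useConfigOnly true

    # per-repository override, e.g. for a work project:
    $ git config user.email "ada@work.example.com"

With useConfigOnly set, Git will not guess an identity from the system's hostname and instead asks you to configure user.email before the first commit in a repository that provides no value for it.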
# git config\n\n> Manage custom configuration options for Git repositories.\n> These configurations can be local (for the current repository) or global (for the current user).\n> More information: <https://git-scm.com/docs/git-config>.\n\n- List only local configuration entries (stored in `.git/config` in the current repository):\n\n`git config --list --local`\n\n- List only global configuration entries (stored in `~/.gitconfig` by default or in `$XDG_CONFIG_HOME/git/config` if such a file exists):\n\n`git config --list --global`\n\n- List only system configuration entries (stored in `/etc/gitconfig`), and show their file location:\n\n`git config --list --system --show-origin`\n\n- Get the value of a given configuration entry:\n\n`git config alias.unstage`\n\n- Set the global value of a given configuration entry:\n\n`git config --global alias.unstage "reset HEAD --"`\n\n- Revert a global configuration entry to its default value:\n\n`git config --global --unset alias.unstage`\n\n- Edit the Git configuration for the current repository in the default editor:\n\n`git config --edit`\n\n- Edit the global Git configuration in the default editor:\n\n`git config --global --edit`\n
git-count-objects
git-count-objects(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-count-objects(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | GIT | COLOPHON GIT-COUNT-OBJECTS(1) Git Manual GIT-COUNT-OBJECTS(1) NAME top git-count-objects - Count unpacked number of objects and their disk consumption SYNOPSIS top git count-objects [-v] [-H | --human-readable] DESCRIPTION top Counts the number of unpacked object files and disk space consumed by them, to help you decide when it is a good time to repack. OPTIONS top -v, --verbose Provide more detailed reports: count: the number of loose objects size: disk space consumed by loose objects, in KiB (unless -H is specified) in-pack: the number of in-pack objects size-pack: disk space consumed by the packs, in KiB (unless -H is specified) prune-packable: the number of loose objects that are also present in the packs. These objects could be pruned using git prune-packed. garbage: the number of files in the object database that are neither valid loose objects nor valid packs size-garbage: disk space consumed by garbage files, in KiB (unless -H is specified) alternate: absolute path of alternate object databases; may appear multiple times, one line per path. Note that if the path contains non-printable characters, it may be surrounded by double-quotes and contain C-style backslashed escape sequences. -H, --human-readable Print sizes in human readable format GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-COUNT-OBJECTS(1) Pages that refer to this page: git(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
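A short, hedged usage sketch tying the verbose output to repacking (there is no fixed threshold; "many" loose objects is a judgment call):

    $ git count-objects -v -H
    # if the count/size lines report a large number of loose objects:
    $ git gc

git gc packs loose objects and prunes those already contained in packs, after which git count-objects should report a much smaller loose-object count.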
# git count-objects\n\n> Count the number of unpacked objects and their disk consumption.\n> More information: <https://git-scm.com/docs/git-count-objects>.\n\n- Count all objects and display the total disk usage:\n\n`git count-objects`\n\n- Display a count of all objects and their total disk usage, displaying sizes in human-readable units:\n\n`git count-objects --human-readable`\n\n- Display more verbose information:\n\n`git count-objects --verbose`\n\n- Display more verbose information, displaying sizes in human-readable units:\n\n`git count-objects --human-readable --verbose`\n
git-credential
git-credential(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-credential(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | TYPICAL USE OF GIT CREDENTIAL | INPUT/OUTPUT FORMAT | GIT | COLOPHON GIT-CREDENTIAL(1) Git Manual GIT-CREDENTIAL(1) NAME top git-credential - Retrieve and store user credentials SYNOPSIS top 'git credential' (fill|approve|reject) DESCRIPTION top Git has an internal interface for storing and retrieving credentials from system-specific helpers, as well as prompting the user for usernames and passwords. The git-credential command exposes this interface to scripts which may want to retrieve, store, or prompt for credentials in the same manner as Git. The design of this scriptable interface models the internal C API; see credential.h for more background on the concepts. git-credential takes an "action" option on the command-line (one of fill, approve, or reject) and reads a credential description on stdin (see INPUT/OUTPUT FORMAT). If the action is fill, git-credential will attempt to add "username" and "password" attributes to the description by reading config files, by contacting any configured credential helpers, or by prompting the user. The username and password attributes of the credential description are then printed to stdout together with the attributes already provided. If the action is approve, git-credential will send the description to any configured credential helpers, which may store the credential for later use. If the action is reject, git-credential will send the description to any configured credential helpers, which may erase any stored credentials matching the description. If the action is approve or reject, no output should be emitted. TYPICAL USE OF GIT CREDENTIAL top An application using git-credential will typically use git credential following these steps: 1. Generate a credential description based on the context. For example, if we want a password for https://example.com/foo.git , we might generate the following credential description (don't forget the blank line at the end; it tells git credential that the application finished feeding all the information it has): protocol=https host=example.com path=foo.git 2. Ask git-credential to give us a username and password for this description. This is done by running git credential fill, feeding the description from step (1) to its standard input. The complete credential description (including the credential per se, i.e. the login and password) will be produced on standard output, like: protocol=https host=example.com username=bob password=secr3t In most cases, this means the attributes given in the input will be repeated in the output, but Git may also modify the credential description, for example by removing the path attribute when the protocol is HTTP(s) and credential.useHttpPath is false. If the git credential knew about the password, this step may not have involved the user actually typing this password (the user may have typed a password to unlock the keychain instead, or no user interaction was done if the keychain was already unlocked) before it returned password=secr3t. 3. Use the credential (e.g., access the URL with the username and password from step (2)), and see if it's accepted. 4. Report on the success or failure of the password. If the credential allowed the operation to complete successfully, then it can be marked with an "approve" action to tell git credential to reuse it in its next invocation. 
If the credential was rejected during the operation, use the "reject" action so that git credential will ask for a new password in its next invocation. In either case, git credential should be fed with the credential description obtained from step (2) (which also contains the fields provided in step (1)). INPUT/OUTPUT FORMAT top git credential reads and/or writes (depending on the action used) credential information in its standard input/output. This information can correspond either to keys for which git credential will obtain the login information (e.g. host, protocol, path), or to the actual credential data to be obtained (username/password). The credential is split into a set of named attributes, with one attribute per line. Each attribute is specified by a key-value pair, separated by an = (equals) sign, followed by a newline. The key may contain any bytes except =, newline, or NUL. The value may contain any bytes except newline or NUL. Attributes with keys that end with C-style array brackets [] can have multiple values. Each instance of a multi-valued attribute forms an ordered list of values - the order of the repeated attributes defines the order of the values. An empty multi-valued attribute (key[]=\n) acts to clear any previous entries and reset the list. In all cases, all bytes are treated as-is (i.e., there is no quoting, and one cannot transmit a value with newline or NUL in it). The list of attributes is terminated by a blank line or end-of-file. Git understands the following attributes: protocol The protocol over which the credential will be used (e.g., https). host The remote hostname for a network credential. This includes the port number if one was specified (e.g., "example.com:8088"). path The path with which the credential will be used. E.g., for accessing a remote https repository, this will be the repository's path on the server. username The credential's username, if we already have one (e.g., from a URL, the configuration, the user, or from a previously run helper). password The credential's password, if we are asking it to be stored. password_expiry_utc Generated passwords such as an OAuth access token may have an expiry date. When reading credentials from helpers, git credential fill ignores expired passwords. Represented as Unix time UTC, seconds since 1970. oauth_refresh_token An OAuth refresh token may accompany a password that is an OAuth access token. Helpers must treat this attribute as confidential like the password attribute. Git itself has no special behaviour for this attribute. url When this special attribute is read by git credential, the value is parsed as a URL and treated as if its constituent parts were read (e.g., url=https://example.com would behave as if protocol=https and host=example.com had been provided). This can help callers avoid parsing URLs themselves. Note that specifying a protocol is mandatory and if the URL doesn't specify a hostname (e.g., "cert:///path/to/file") the credential will contain a hostname attribute whose value is an empty string. Components which are missing from the URL (e.g., there is no username in the example above) will be left unset. wwwauth[] When an HTTP response is received by Git that includes one or more WWW-Authenticate authentication headers, these will be passed by Git to credential helpers. Each WWW-Authenticate header value is passed as a multi-valued attribute wwwauth[], where the order of the attributes is the same as they appear in the HTTP response. 
This attribute is one-way from Git to pass additional information to credential helpers. Unrecognised attributes are silently discarded. GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CREDENTIAL(1) Pages that refer to this page: git(1), git-config(1), git-send-email(1), gitcredentials(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
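A minimal scripted sketch of the fill step described above (the host is a placeholder; the here-document's end-of-file terminates the attribute list just as a blank line would):

    $ git credential fill <<'EOF'
    protocol=https
    host=example.com
    EOF

Git prints the completed description (protocol, host, username, password) on standard output, possibly after consulting helpers or prompting. A calling script would capture that output and, depending on whether the credential worked, feed it back unchanged to git credential approve or git credential reject.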
# git credential\n\n> Retrieve and store user credentials.\n> More information: <https://git-scm.com/docs/git-credential>.\n\n- Display credential information, retrieving the username and password from configuration files:\n\n`echo "{{url=http://example.com}}" | git credential fill`\n\n- Send credential information to all configured credential helpers to store for later use:\n\n`echo "{{url=http://example.com}}" | git credential approve`\n\n- Erase the specified credential information from all the configured credential helpers:\n\n`echo "{{url=http://example.com}}" | git credential reject`\n
git-credential-cache
git-credential-cache(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-credential-cache(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONTROLLING THE DAEMON | EXAMPLES | GIT | COLOPHON GIT-CREDENTIAL-CACHE(1) Git Manual GIT-CREDENTIAL-CACHE(1) NAME top git-credential-cache - Helper to temporarily store passwords in memory SYNOPSIS top git config credential.helper 'cache [<options>]' DESCRIPTION top This command caches credentials for use by future Git programs. The stored credentials are kept in memory of the cache-daemon process (instead of being written to a file) and are forgotten after a configurable timeout. Credentials are forgotten sooner if the cache-daemon dies, for example if the system restarts. The cache is accessible over a Unix domain socket, restricted to the current user by filesystem permissions. You probably dont want to invoke this command directly; it is meant to be used as a credential helper by other parts of Git. See gitcredentials(7) or EXAMPLES below. OPTIONS top --timeout <seconds> Number of seconds to cache credentials (default: 900). --socket <path> Use <path> to contact a running cache daemon (or start a new cache daemon if one is not started). Defaults to $XDG_CACHE_HOME/git/credential/socket unless ~/.git-credential-cache/ exists in which case ~/.git-credential-cache/socket is used instead. If your home directory is on a network-mounted filesystem, you may need to change this to a local filesystem. You must specify an absolute path. CONTROLLING THE DAEMON top If you would like the daemon to exit early, forgetting all cached credentials before their timeout, you can issue an exit action: git credential-cache exit EXAMPLES top The point of this helper is to reduce the number of times you must type your username or password. For example: $ git config credential.helper cache $ git push http://example.com/repo.git Username: <type your username> Password: <type your password> [work for 5 more minutes] $ git push http://example.com/repo.git [your credentials are used automatically] You can provide options via the credential.helper configuration variable (this example increases the cache time to 1 hour): $ git config credential.helper 'cache --timeout=3600' GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CREDENTIAL-CACHE(1) Pages that refer to this page: git(1), git-credential-cache--daemon(1), git-credential-store(1), gitcredentials(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
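For example, to cache credentials for 30 minutes and then discard them early once a batch of work is done (the timeout value is arbitrary):

    $ git config --global credential.helper 'cache --timeout=1800'
    [do some pushes and fetches; credentials are typed once]
    $ git credential-cache exit

The exit action tells the running cache daemon to forget everything immediately instead of waiting for the timeout.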
# git credential-cache\n\n> Git helper to temporarily store passwords in memory.\n> More information: <https://git-scm.com/docs/git-credential-cache>.\n\n- Store Git credentials for a specific amount of time:\n\n`git config credential.helper 'cache --timeout={{time_in_seconds}}'`\n
git-credential-store
git-credential-store(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-credential-store(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | EXAMPLES | STORAGE FORMAT | GIT | COLOPHON GIT-CREDENTIAL-STORE(1) Git Manual GIT-CREDENTIAL-STORE(1) NAME top git-credential-store - Helper to store credentials on disk SYNOPSIS top git config credential.helper 'store [<options>]' DESCRIPTION top Note Using this helper will store your passwords unencrypted on disk, protected only by filesystem permissions. If this is not an acceptable security tradeoff, try git-credential-cache(1), or find a helper that integrates with secure storage provided by your operating system. This command stores credentials indefinitely on disk for use by future Git programs. You probably dont want to invoke this command directly; it is meant to be used as a credential helper by other parts of git. See gitcredentials(7) or EXAMPLES below. OPTIONS top --file=<path> Use <path> to lookup and store credentials. The file will have its filesystem permissions set to prevent other users on the system from reading it, but it will not be encrypted or otherwise protected. If not specified, credentials will be searched for from ~/.git-credentials and $XDG_CONFIG_HOME/git/credentials, and credentials will be written to ~/.git-credentials if it exists, or $XDG_CONFIG_HOME/git/credentials if it exists and the former does not. See also the section called FILES. FILES top If not set explicitly with --file, there are two files where git-credential-store will search for credentials in order of precedence: ~/.git-credentials User-specific credentials file. $XDG_CONFIG_HOME/git/credentials Second user-specific credentials file. If $XDG_CONFIG_HOME is not set or empty, $HOME/.config/git/credentials will be used. Any credentials stored in this file will not be used if ~/.git-credentials has a matching credential as well. It is a good idea not to create this file if you sometimes use older versions of Git that do not support it. For credential lookups, the files are read in the order given above, with the first matching credential found taking precedence over credentials found in files further down the list. Credential storage will by default write to the first existing file in the list. If none of these files exist, ~/.git-credentials will be created and written to. When erasing credentials, matching credentials will be erased from all files. EXAMPLES top The point of this helper is to reduce the number of times you must type your username or password. For example: $ git config credential.helper store $ git push http://example.com/repo.git Username: <type your username> Password: <type your password> [several days later] $ git push http://example.com/repo.git [your credentials are used automatically] STORAGE FORMAT top The .git-credentials file is stored in plaintext. Each credential is stored on its own line as a URL like: https://user:pass@example.com No other kinds of lines (e.g. empty lines or comment lines) are allowed in the file, even though some may be silently ignored. Do not view or edit the file with editors. When Git needs authentication for a particular URL context, credential-store will consider that context a pattern to match against each entry in the credentials file. If the protocol, hostname, and username (if we already have one) match, then the password is returned to Git. See the discussion of configuration in gitcredentials(7) for more information. 
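A hedged example of pointing the helper at a non-default file (path and credentials are placeholders):

    $ git config --global credential.helper 'store --file=/home/alice/.config/git/private-credentials'

After the next successful authentication, that file holds one URL-shaped line per credential in the plaintext format shown above, e.g. https://alice:s3cret@example.com, protected only by its filesystem permissions.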
GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CREDENTIAL-STORE(1) Pages that refer to this page: git(1), gitcredentials(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git credential-store\n\n> `git` helper to store passwords on disk.\n> More information: <https://git-scm.com/docs/git-credential-store>.\n\n- Store Git credentials in a specific file:\n\n`git config credential.helper 'store --file={{path/to/file}}'`\n
git-cvsexportcommit
git-cvsexportcommit(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-cvsexportcommit(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFIGURATION | EXAMPLES | GIT | COLOPHON GIT-CVSEXPORTCOMMIT(1) Git Manual GIT-CVSEXPORTCOMMIT(1) NAME top git-cvsexportcommit - Export a single commit to a CVS checkout SYNOPSIS top git cvsexportcommit [-h] [-u] [-v] [-c] [-P] [-p] [-a] [-d <cvsroot>] [-w <cvs-workdir>] [-W] [-f] [-m <msgprefix>] [<parent-commit>] <commit-id> DESCRIPTION top Exports a commit from Git to a CVS checkout, making it easier to merge patches from a Git repository into a CVS repository. Specify the name of a CVS checkout using the -w switch or execute it from the root of the CVS working copy. In the latter case GIT_DIR must be defined. See examples below. It does its best to do the safe thing, it will check that the files are unchanged and up to date in the CVS checkout, and it will not autocommit by default. Supports file additions, removals, and commits that affect binary files. If the commit is a merge commit, you must tell git cvsexportcommit what parent the changeset should be done against. OPTIONS top -c Commit automatically if the patch applied cleanly. It will not commit if any hunks fail to apply or there were other problems. -p Be pedantic (paranoid) when applying patches. Invokes patch with --fuzz=0 -a Add authorship information. Adds Author line, and Committer (if different from Author) to the message. -d Set an alternative CVSROOT to use. This corresponds to the CVS -d parameter. Usually users will not want to set this, except if using CVS in an asymmetric fashion. -f Force the merge even if the files are not up to date. -P Force the parent commit, even if it is not a direct parent. -m Prepend the commit message with the provided prefix. Useful for patch series and the like. -u Update affected files from CVS repository before attempting export. -k Reverse CVS keyword expansion (e.g. $Revision: 1.2.3.4$ becomes $Revision$) in working CVS checkout before applying patch. -w Specify the location of the CVS checkout to use for the export. This option does not require GIT_DIR to be set before execution if the current directory is within a Git repository. The default is the value of cvsexportcommit.cvsdir. -W Tell cvsexportcommit that the current working directory is not only a Git checkout, but also the CVS checkout. Therefore, Git will reset the working directory to the parent commit before proceeding. -v Verbose. CONFIGURATION top cvsexportcommit.cvsdir The default location of the CVS checkout to use for the export. EXAMPLES top Merge one patch into CVS $ export GIT_DIR=~/project/.git $ cd ~/project_cvs_checkout $ git cvsexportcommit -v <commit-sha1> $ cvs commit -F .msg <files> Merge one patch into CVS (-c and -w options). The working directory is within the Git Repo $ git cvsexportcommit -v -c -w ~/project_cvs_checkout <commit-sha1> Merge pending patches into CVS automatically only if you really know what you are doing $ export GIT_DIR=~/project/.git $ cd ~/project_cvs_checkout $ git cherry cvshead myhead | sed -n 's/^+ //p' | xargs -l1 git cvsexportcommit -c -p -v GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. 
This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-CVSEXPORTCOMMIT(1) Pages that refer to this page: git(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git cvsexportcommit\n\n> Export a single `Git` commit to a CVS checkout.\n> More information: <https://git-scm.com/docs/git-cvsexportcommit>.\n\n- Merge a specific patch into CVS:\n\n`git cvsexportcommit -v -c -w {{path/to/project_cvs_checkout}} {{commit_sha1}}`\n
git-daemon
git-daemon(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-daemon(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SERVICES | EXAMPLES | ENVIRONMENT | GIT | COLOPHON GIT-DAEMON(1) Git Manual GIT-DAEMON(1) NAME top git-daemon - A really simple server for Git repositories SYNOPSIS top git daemon [--verbose] [--syslog] [--export-all] [--timeout=<n>] [--init-timeout=<n>] [--max-connections=<n>] [--strict-paths] [--base-path=<path>] [--base-path-relaxed] [--user-path | --user-path=<path>] [--interpolated-path=<pathtemplate>] [--reuseaddr] [--detach] [--pid-file=<file>] [--enable=<service>] [--disable=<service>] [--allow-override=<service>] [--forbid-override=<service>] [--access-hook=<path>] [--[no-]informative-errors] [--inetd | [--listen=<host_or_ipaddr>] [--port=<n>] [--user=<user> [--group=<group>]]] [--log-destination=(stderr|syslog|none)] [<directory>...] DESCRIPTION top A really simple TCP Git daemon that normally listens on port "DEFAULT_GIT_PORT" aka 9418. It waits for a connection asking for a service, and will serve that service if it is enabled. It verifies that the directory has the magic file "git-daemon-export-ok", and it will refuse to export any Git directory that hasn't explicitly been marked for export this way (unless the --export-all parameter is specified). If you pass some directory paths as git daemon arguments, the offers are limited to repositories within those directories. By default, only upload-pack service is enabled, which serves git fetch-pack and git ls-remote clients, which are invoked from git fetch, git pull, and git clone. This is ideally suited for read-only updates, i.e., pulling from Git repositories. An upload-archive also exists to serve git archive. OPTIONS top --strict-paths Match paths exactly (i.e. don't allow "/foo/repo" when the real path is "/foo/repo.git" or "/foo/repo/.git") and don't do user-relative paths. git daemon will refuse to start when this option is enabled and no directory arguments are provided. --base-path=<path> Remap all the path requests as relative to the given path. This is sort of "Git root" - if you run git daemon with --base-path=/srv/git on example.com, then if you later try to pull git://example.com/hello.git, git daemon will interpret the path as /srv/git/hello.git. --base-path-relaxed If --base-path is enabled and repo lookup fails, with this option git daemon will attempt to lookup without prefixing the base path. This is useful for switching to --base-path usage, while still allowing the old paths. --interpolated-path=<pathtemplate> To support virtual hosting, an interpolated path template can be used to dynamically construct alternate paths. The template supports %H for the target hostname as supplied by the client but converted to all lowercase, %CH for the canonical hostname, %IP for the server's IP address, %P for the port number, and %D for the absolute path of the named repository. After interpolation, the path is validated against the directory list. --export-all Allow pulling from all directories that look like Git repositories (have the objects and refs subdirectories), even if they do not have the git-daemon-export-ok file. --inetd Have the server run as an inetd service. Implies --syslog (may be overridden with --log-destination=). Incompatible with --detach, --port, --listen, --user and --group options. --listen=<host_or_ipaddr> Listen on a specific IP address or hostname. IP addresses can be either an IPv4 address or an IPv6 address if supported. 
If IPv6 is not supported, then --listen=hostname is also not supported and --listen must be given an IPv4 address. Can be given more than once. Incompatible with --inetd option. --port=<n> Listen on an alternative port. Incompatible with --inetd option. --init-timeout=<n> Timeout (in seconds) between the moment the connection is established and the client request is received (typically a rather low value, since that should be basically immediate). --timeout=<n> Timeout (in seconds) for specific client sub-requests. This includes the time it takes for the server to process the sub-request and the time spent waiting for the next client's request. --max-connections=<n> Maximum number of concurrent clients, defaults to 32. Set it to zero for no limit. --syslog Short for --log-destination=syslog. --log-destination=<destination> Send log messages to the specified destination. Note that this option does not imply --verbose, thus by default only error conditions will be logged. The <destination> must be one of: stderr Write to standard error. Note that if --detach is specified, the process disconnects from the real standard error, making this destination effectively equivalent to none. syslog Write to syslog, using the git-daemon identifier. none Disable all logging. The default destination is syslog if --inetd or --detach is specified, otherwise stderr. --user-path, --user-path=<path> Allow ~user notation to be used in requests. When specified with no parameter, a request to git://host/~alice/foo is taken as a request to access foo repository in the home directory of user alice. If --user-path=path is specified, the same request is taken as a request to access path/foo repository in the home directory of user alice. --verbose Log details about the incoming connections and requested files. --reuseaddr Use SO_REUSEADDR when binding the listening socket. This allows the server to restart without waiting for old connections to time out. --detach Detach from the shell. Implies --syslog. --pid-file=<file> Save the process id in file. Ignored when the daemon is run under --inetd. --user=<user>, --group=<group> Change daemon's uid and gid before entering the service loop. When only --user is given without --group, the primary group ID for the user is used. The values of the option are given to getpwnam(3) and getgrnam(3) and numeric IDs are not supported. Giving these options is an error when used with --inetd; use the facility of inet daemon to achieve the same before spawning git daemon if needed. Like many programs that switch user id, the daemon does not reset environment variables such as $HOME when it runs git programs, e.g. upload-pack and receive-pack. When using this option, you may also want to set and export HOME to point at the home directory of <user> before starting the daemon, and make sure any Git configuration files in that directory are readable by <user>. --enable=<service>, --disable=<service> Enable/disable the service site-wide per default. Note that a service disabled site-wide can still be enabled per repository if it is marked overridable and the repository enables the service with a configuration item. --allow-override=<service>, --forbid-override=<service> Allow/forbid overriding the site-wide default with per repository configuration. By default, all the services may be overridden. 
--[no-]informative-errors When informative errors are turned on, git-daemon will report more verbose errors to the client, differentiating conditions like "no such repository" from "repository not exported". This is more convenient for clients, but may leak information about the existence of unexported repositories. When informative errors are not enabled, all errors report "access denied" to the client. The default is --no-informative-errors. --access-hook=<path> Every time a client connects, first run an external command specified by the <path> with service name (e.g. "upload-pack"), path to the repository, hostname (%H), canonical hostname (%CH), IP address (%IP), and TCP port (%P) as its command-line arguments. The external command can decide to decline the service by exiting with a non-zero status (or to allow it by exiting with a zero status). It can also look at the $REMOTE_ADDR and $REMOTE_PORT environment variables to learn about the requestor when making this decision. The external command can optionally write a single line to its standard output to be sent to the requestor as an error message when it declines the service. <directory> The remaining arguments provide a list of directories. If any directories are specified, then the git-daemon process will serve a requested directory only if it is contained in one of these directories. If --strict-paths is specified, then the requested directory must match one of these directories exactly. SERVICES top These services can be globally enabled/disabled using the command-line options of this command. If finer-grained control is desired (e.g. to allow git archive to be run against only in a few selected repositories the daemon serves), the per-repository configuration file can be used to enable or disable them. upload-pack This serves git fetch-pack and git ls-remote clients. It is enabled by default, but a repository can disable it by setting daemon.uploadpack configuration item to false. upload-archive This serves git archive --remote. It is disabled by default, but a repository can enable it by setting daemon.uploadarch configuration item to true. receive-pack This serves git send-pack clients, allowing anonymous push. It is disabled by default, as there is no authentication in the protocol (in other words, anybody can push anything into the repository, including removal of refs). This is solely meant for a closed LAN setting where everybody is friendly. This service can be enabled by setting daemon.receivepack configuration item to true. EXAMPLES top We assume the following in /etc/services $ grep 9418 /etc/services git 9418/tcp # Git Version Control System git daemon as inetd server To set up git daemon as an inetd service that handles any repository within /pub/foo or /pub/bar, place an entry like the following into /etc/inetd all on one line: git stream tcp nowait nobody /usr/bin/git git daemon --inetd --verbose --export-all /pub/foo /pub/bar git daemon as inetd server for virtual hosts To set up git daemon as an inetd service that handles repositories for different virtual hosts, www.example.com and www.example.org, place an entry like the following into /etc/inetd all on one line: git stream tcp nowait nobody /usr/bin/git git daemon --inetd --verbose --export-all --interpolated-path=/pub/%H%D /pub/www.example.org/software /pub/www.example.com/software /software In this example, the root-level directory /pub will contain a subdirectory for each virtual host name supported. 
Further, both hosts advertise repositories simply as git://www.example.com/software/repo.git. For pre-1.4.0 clients, a symlink from /software into the appropriate default repository could be made as well. git daemon as regular daemon for virtual hosts To set up git daemon as a regular, non-inetd service that handles repositories for multiple virtual hosts based on their IP addresses, start the daemon like this: git daemon --verbose --export-all --interpolated-path=/pub/%IP/%D /pub/192.168.1.200/software /pub/10.10.220.23/software In this example, the root-level directory /pub will contain a subdirectory for each virtual host IP address supported. Repositories can still be accessed by hostname though, assuming they correspond to these IP addresses. selectively enable/disable services per repository To enable git archive --remote and disable git fetch against a repository, have the following in the configuration file in the repository (that is the file config next to HEAD, refs and objects). [daemon] uploadpack = false uploadarch = true ENVIRONMENT top git daemon will set REMOTE_ADDR to the IP address of the client that connected to it, if the IP address is available. REMOTE_ADDR will be available in the environment of hooks called when services are performed. GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-DAEMON(1) Pages that refer to this page: git(1), git-cvsserver(1), git-shell(1), gitweb(1), giteveryday(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
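As a compact quick start (paths and hostname are placeholders): export a single repository read-only over git:// by marking it and pointing the daemon's base path at its parent directory:

    $ touch /srv/git/project.git/git-daemon-export-ok
    $ git daemon --base-path=/srv/git --reuseaddr --verbose --detach

    # clients can then clone it with:
    $ git clone git://server.example.com/project.git

Because only upload-pack is enabled by default, this serves clones and fetches but rejects pushes unless receive-pack is explicitly enabled as described in SERVICES.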
# git daemon\n\n> A really simple server for Git repositories.\n> More information: <https://git-scm.com/docs/git-daemon>.\n\n- Launch a Git daemon with a whitelisted set of directories:\n\n`git daemon --export-all {{path/to/directory1}} {{path/to/directory2}}`\n\n- Launch a Git daemon with a specific base directory and allow pulling from all sub-directories that look like Git repositories:\n\n`git daemon --base-path={{path/to/directory}} --export-all --reuseaddr`\n\n- Launch a Git daemon for the specified directory, verbosely printing log messages and allowing Git clients to write to it:\n\n`git daemon {{path/to/directory}} --enable=receive-pack --informative-errors --verbose`\n
git-describe
git-describe(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-describe(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | SEARCH STRATEGY | BUGS | GIT | COLOPHON GIT-DESCRIBE(1) Git Manual GIT-DESCRIBE(1) NAME top git-describe - Give an object a human readable name based on an available ref SYNOPSIS top git describe [--all] [--tags] [--contains] [--abbrev=<n>] [<commit-ish>...] git describe [--all] [--tags] [--contains] [--abbrev=<n>] --dirty[=<mark>] git describe <blob> DESCRIPTION top The command finds the most recent tag that is reachable from a commit. If the tag points to the commit, then only the tag is shown. Otherwise, it suffixes the tag name with the number of additional commits on top of the tagged object and the abbreviated object name of the most recent commit. The result is a "human-readable" object name which can also be used to identify the commit to other git commands. By default (without --all or --tags) git describe only shows annotated tags. For more information about creating annotated tags see the -a and -s options to git-tag(1). If the given object refers to a blob, it will be described as <commit-ish>:<path>, such that the blob can be found at <path> in the <commit-ish>, which itself describes the first commit in which this blob occurs in a reverse revision walk from HEAD. OPTIONS top <commit-ish>... Commit-ish object names to describe. Defaults to HEAD if omitted. --dirty[=<mark>], --broken[=<mark>] Describe the state of the working tree. When the working tree matches HEAD, the output is the same as "git describe HEAD". If the working tree has local modification "-dirty" is appended to it. If a repository is corrupt and Git cannot determine if there is local modification, Git will error out, unless --broken is given, which appends the suffix "-broken" instead. --all Instead of using only the annotated tags, use any ref found in refs/ namespace. This option enables matching any known branch, remote-tracking branch, or lightweight tag. --tags Instead of using only the annotated tags, use any tag found in refs/tags namespace. This option enables matching a lightweight (non-annotated) tag. --contains Instead of finding the tag that predates the commit, find the tag that comes after the commit, and thus contains it. Automatically implies --tags. --abbrev=<n> Instead of using the default number of hexadecimal digits (which will vary according to the number of objects in the repository with a default of 7) of the abbreviated object name, use <n> digits, or as many digits as needed to form a unique object name. An <n> of 0 will suppress long format, only showing the closest tag. --candidates=<n> Instead of considering only the 10 most recent tags as candidates to describe the input commit-ish consider up to <n> candidates. Increasing <n> above 10 will take slightly longer but may produce a more accurate result. An <n> of 0 will cause only exact matches to be output. --exact-match Only output exact matches (a tag directly references the supplied commit). This is a synonym for --candidates=0. --debug Verbosely display information about the searching strategy being employed to standard error. The tag name will still be printed to standard out. --long Always output the long format (the tag, the number of commits and the abbreviated commit name) even when it matches a tag. 
This is useful when you want to see parts of the commit object name in "describe" output, even when the commit in question happens to be a tagged version. Instead of just emitting the tag name, it will describe such a commit as v1.2-0-gdeadbee (0th commit since tag v1.2 that points at object deadbee....). --match <pattern> Only consider tags matching the given glob(7) pattern, excluding the "refs/tags/" prefix. If used with --all, it also considers local branches and remote-tracking references matching the pattern, excluding respectively "refs/heads/" and "refs/remotes/" prefix; references of other types are never considered. If given multiple times, a list of patterns will be accumulated, and tags matching any of the patterns will be considered. Use --no-match to clear and reset the list of patterns. --exclude <pattern> Do not consider tags matching the given glob(7) pattern, excluding the "refs/tags/" prefix. If used with --all, it also does not consider local branches and remote-tracking references matching the pattern, excluding respectively "refs/heads/" and "refs/remotes/" prefix; references of other types are never considered. If given multiple times, a list of patterns will be accumulated and tags matching any of the patterns will be excluded. When combined with --match a tag will be considered when it matches at least one --match pattern and does not match any of the --exclude patterns. Use --no-exclude to clear and reset the list of patterns. --always Show uniquely abbreviated commit object as fallback. --first-parent Follow only the first parent commit upon seeing a merge commit. This is useful when you wish to not match tags on branches merged in the history of the target commit. EXAMPLES top With something like git.git current tree, I get: [torvalds@g5 git]$ git describe parent v1.0.4-14-g2414721 i.e. the current head of my "parent" branch is based on v1.0.4, but since it has a few commits on top of that, describe has added the number of additional commits ("14") and an abbreviated object name for the commit itself ("2414721") at the end. The number of additional commits is the number of commits which would be displayed by "git log v1.0.4..parent". The hash suffix is "-g" + an unambiguous abbreviation for the tip commit of parent (which was 2414721b194453f058079d897d13c4e377f92dc6). The length of the abbreviation scales as the repository grows, using the approximate number of objects in the repository and a bit of math around the birthday paradox, and defaults to a minimum of 7. The "g" prefix stands for "git" and is used to allow describing the version of a software depending on the SCM the software is managed with. This is useful in an environment where people may use different SCMs. 
Doing a git describe on a tag-name will just show the tag name: [torvalds@g5 git]$ git describe v1.0.4 v1.0.4 With --all, the command can use branch heads as references, so the output shows the reference path as well: [torvalds@g5 git]$ git describe --all --abbrev=4 v1.0.5^2 tags/v1.0.0-21-g975b [torvalds@g5 git]$ git describe --all --abbrev=4 HEAD^ heads/lt/describe-7-g975b With --abbrev set to 0, the command can be used to find the closest tagname without any suffix: [torvalds@g5 git]$ git describe --abbrev=0 v1.0.5^2 tags/v1.0.0 Note that the suffix you get if you type these commands today may be longer than what Linus saw above when he ran these commands, as your Git repository may have new commits whose object names begin with 975b that did not exist back then, and "-g975b" suffix alone may not be sufficient to disambiguate these commits. SEARCH STRATEGY top For each commit-ish supplied, git describe will first look for a tag which tags exactly that commit. Annotated tags will always be preferred over lightweight tags, and tags with newer dates will always be preferred over tags with older dates. If an exact match is found, its name will be output and searching will stop. If an exact match was not found, git describe will walk back through the commit history to locate an ancestor commit which has been tagged. The ancestor's tag will be output along with an abbreviation of the input commit-ish's SHA-1. If --first-parent was specified then the walk will only consider the first parent of each commit. If multiple tags were found during the walk then the tag which has the fewest commits different from the input commit-ish will be selected and output. Here "fewest commits different" means that the number of commits which would be shown by git log tag..input is the smallest possible. BUGS top Tree objects, as well as tag objects not pointing at commits, cannot be described. When describing blobs, the lightweight tags pointing at blobs are ignored, but the blob is still described as <commit-ish>:<path> despite the lightweight tag being favorable. GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-DESCRIBE(1) Pages that refer to this page: git(1), git-diff-tree(1), git-for-each-ref(1), git-log(1), git-rev-list(1), git-show(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
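A few common invocations that follow from the options above (tag names and output shapes are hypothetical):

    # nearest annotated tag plus commit count and abbreviated hash, e.g. v2.1.0-14-gdeadbee
    $ git describe

    # also consider lightweight tags, and append -dirty if the working tree has local changes
    $ git describe --tags --dirty

    # fail unless HEAD is tagged exactly; handy in release scripts
    $ git describe --exact-match --tags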
# git describe\n\n> Give an object a human-readable name based on an available ref.\n> More information: <https://git-scm.com/docs/git-describe>.\n\n- Create a unique name for the current commit (the name contains the most recent annotated tag, the number of additional commits, and the abbreviated commit hash):\n\n`git describe`\n\n- Create a name, abbreviating the commit hash to 4 hexadecimal digits:\n\n`git describe --abbrev={{4}}`\n\n- Generate a name with the tag reference path:\n\n`git describe --all`\n\n- Describe a Git tag:\n\n`git describe {{v1.0.0}}`\n\n- Create a name for the last commit of a given branch:\n\n`git describe {{branch_name}}`\n
git-diff
git-diff(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-diff(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | RAW OUTPUT FORMAT | DIFF FORMAT FOR MERGES | GENERATING PATCH TEXT WITH -P | COMBINED DIFF FORMAT | OTHER DIFF FORMATS | EXAMPLES | CONFIGURATION | SEE ALSO | GIT | COLOPHON GIT-DIFF(1) Git Manual GIT-DIFF(1) NAME top git-diff - Show changes between commits, commit and working tree, etc SYNOPSIS top git diff [<options>] [<commit>] [--] [<path>...] git diff [<options>] --cached [--merge-base] [<commit>] [--] [<path>...] git diff [<options>] [--merge-base] <commit> [<commit>...] <commit> [--] [<path>...] git diff [<options>] <commit>...<commit> [--] [<path>...] git diff [<options>] <blob> <blob> git diff [<options>] --no-index [--] <path> <path> DESCRIPTION top Show changes between the working tree and the index or a tree, changes between the index and a tree, changes between two trees, changes resulting from a merge, changes between two blob objects, or changes between two files on disk. git diff [<options>] [--] [<path>...] This form is to view the changes you made relative to the index (staging area for the next commit). In other words, the differences are what you could tell Git to further add to the index but you still havent. You can stage these changes by using git-add(1). git diff [<options>] --no-index [--] <path> <path> This form is to compare the given two paths on the filesystem. You can omit the --no-index option when running the command in a working tree controlled by Git and at least one of the paths points outside the working tree, or when running the command outside a working tree controlled by Git. This form implies --exit-code. git diff [<options>] --cached [--merge-base] [<commit>] [--] [<path>...] This form is to view the changes you staged for the next commit relative to the named <commit>. Typically you would want comparison with the latest commit, so if you do not give <commit>, it defaults to HEAD. If HEAD does not exist (e.g. unborn branches) and <commit> is not given, it shows all staged changes. --staged is a synonym of --cached. If --merge-base is given, instead of using <commit>, use the merge base of <commit> and HEAD. git diff --cached --merge-base A is equivalent to git diff --cached $(git merge-base A HEAD). git diff [<options>] [--merge-base] <commit> [--] [<path>...] This form is to view the changes you have in your working tree relative to the named <commit>. You can use HEAD to compare it with the latest commit, or a branch name to compare with the tip of a different branch. If --merge-base is given, instead of using <commit>, use the merge base of <commit> and HEAD. git diff --merge-base A is equivalent to git diff $(git merge-base A HEAD). git diff [<options>] [--merge-base] <commit> <commit> [--] [<path>...] This is to view the changes between two arbitrary <commit>. If --merge-base is given, use the merge base of the two commits for the "before" side. git diff --merge-base A B is equivalent to git diff $(git merge-base A B) B. git diff [<options>] <commit> <commit>... <commit> [--] [<path>...] This form is to view the results of a merge commit. The first listed <commit> must be the merge itself; the remaining two or more commits should be its parents. Convenient ways to produce the desired set of revisions are to use the suffixes ^@ and ^!. If A is a merge commit, then git diff A A^@, git diff A^! and git show A all give the same combined diff. 
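As a quick illustration of the forms described above (branch and file names are placeholders): $ git diff # working tree vs. index $ git diff --cached # index vs. HEAD $ git diff --cached --merge-base topic # index vs. merge base of topic and HEAD $ git diff --no-index old.txt new.txt # two paths on the filesystem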
git diff [<options>] <commit>..<commit> [--] [<path>...] This is synonymous to the earlier form (without the ..) for viewing the changes between two arbitrary <commit>. If <commit> on one side is omitted, it will have the same effect as using HEAD instead. git diff [<options>] <commit>...<commit> [--] [<path>...] This form is to view the changes on the branch containing and up to the second <commit>, starting at a common ancestor of both <commit>. git diff A...B is equivalent to git diff $(git merge-base A B) B. You can omit any one of <commit>, which has the same effect as using HEAD instead. Just in case you are doing something exotic, it should be noted that all of the <commit> in the above description, except in the --merge-base case and in the last two forms that use .. notations, can be any <tree>. A tree of interest is the one pointed to by the special ref AUTO_MERGE, which is written by the ort merge strategy upon hitting merge conflicts (see git-merge(1)). Comparing the working tree with AUTO_MERGE shows changes youve made so far to resolve textual conflicts (see the examples below). For a more complete list of ways to spell <commit>, see "SPECIFYING REVISIONS" section in gitrevisions(7). However, "diff" is about comparing two endpoints, not ranges, and the range notations (<commit>..<commit> and <commit>...<commit>) do not mean a range as defined in the "SPECIFYING RANGES" section in gitrevisions(7). git diff [<options>] <blob> <blob> This form is to view the differences between the raw contents of two blob objects. OPTIONS top -p, -u, --patch Generate patch (see the section called GENERATING PATCH TEXT WITH -P). This is the default. -s, --no-patch Suppress all output from the diff machinery. Useful for commands like git show that show the patch by default to squelch their output, or to cancel the effect of options like --patch, --stat earlier on the command line in an alias. -U<n>, --unified=<n> Generate diffs with <n> lines of context instead of the usual three. Implies --patch. --output=<file> Output to a specific file instead of stdout. --output-indicator-new=<char>, --output-indicator-old=<char>, --output-indicator-context=<char> Specify the character used to indicate new, old or context lines in the generated patch. Normally they are +, - and ' ' respectively. --raw Generate the diff in raw format. --patch-with-raw Synonym for -p --raw. --indent-heuristic Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default. --no-indent-heuristic Disable the indent heuristic. --minimal Spend extra time to make sure the smallest possible diff is produced. --patience Generate a diff using the "patience diff" algorithm. --histogram Generate a diff using the "histogram diff" algorithm. --anchored=<text> Generate a diff using the "anchored diff" algorithm. This option may be specified more than once. If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally. --diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows: default, myers The basic greedy diff algorithm. Currently, this is the default. minimal Spend extra time to make sure the smallest possible diff is produced. patience Use "patience diff" algorithm when generating patches. 
histogram This algorithm extends the patience algorithm to "support low-occurrence common elements". For instance, if you configured the diff.algorithm variable to a non-default value and want to use the default one, then you have to use --diff-algorithm=default option. --stat[=<width>[,<name-width>[,<count>]]] Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by <width>. The width of the filename part can be limited by giving another width <name-width> after a comma or by setting diff.statNameWidth=<width>. The width of the graph part can be limited by using --stat-graph-width=<width> or by setting diff.statGraphWidth=<width>. Using --stat or --stat-graph-width affects all commands generating a stat graph, while setting diff.statNameWidth or diff.statGraphWidth does not affect git format-patch. By giving a third parameter <count>, you can limit the output to the first <count> lines, followed by ... if there are more. These parameters can also be set individually with --stat-width=<width>, --stat-name-width=<name-width> and --stat-count=<count>. --compact-summary Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if its a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat. The information is put between the filename part and the graph part. Implies --stat. --numstat Similar to --stat, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two - instead of saying 0 0. --shortstat Output only the last line of the --stat format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,...>], --dirstat[=<param1,param2,...>] Output the distribution of relative amount of changes for each sub-directory. The behavior of --dirstat can be customized by passing it a comma separated list of parameters. The defaults are controlled by the diff.dirstat configuration variable (see git-config(1)). The following parameters are available: changes Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. lines Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive --dirstat behavior than the changes behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other --*stat options. files Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest --dirstat behavior, since it does not have to look at the file contents at all. cumulative Count changes in a child directory for the parent directory as well. Note that when using cumulative, the sum of the percentages reported may exceed 100%. 
The default (non-cumulative) behavior can be specified with the noncumulative parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: --dirstat=files,10,cumulative. --cumulative Synonym for --dirstat=cumulative --dirstat-by-file[=<param1,param2>...] Synonym for --dirstat=files,param1,param2... --summary Output a condensed summary of extended header information such as creations, renames and mode changes. --patch-with-stat Synonym for -p --stat. -z When --raw, --numstat, --name-only or --name-status has been given, do not munge pathnames and use NULs as output field terminators. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). --name-only Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the git-log(1) manual page. --name-status Show only names and status of changed files. See the description of the --diff-filter option on what the status letters mean. Just like --name-only the file names are often encoded in UTF-8. --submodule[=<format>] Specify how differences in submodules are shown. When specifying --submodule=short the short format is used. This format just shows the names of the commits at the beginning and end of the range. When --submodule or --submodule=log is specified, the log format is used. This format lists the commits in the range like git-submodule(1) summary does. When --submodule=diff is specified, the diff format is used. This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to diff.submodule or the short format if the config option is unset. --color[=<when>] Show colored diff. --color (i.e. without =<when>) is the same as --color=always. <when> can be one of always, never, or auto. It can be changed by the color.ui and color.diff configuration settings. --no-color Turn off colored diff. This can be used to override configuration settings. It is the same as --color=never. --color-moved[=<mode>] Moved lines of code are colored differently. It can be changed by the diff.colorMoved configuration setting. The <mode> defaults to no if the option is not given and to zebra if the option with no mode is given. The mode must be one of: no Moved lines are not highlighted. default Is a synonym for zebra. This may change to a more sensible mode in the future. plain Any line that is added in one location and was removed in another location will be colored with color.diff.newMoved. Similarly color.diff.oldMoved will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation. blocks Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the color.diff.{old,new}Moved color. Adjacent blocks cannot be told apart. zebra Blocks of moved text are detected as in blocks mode. The blocks are painted using either the color.diff.{old,new}Moved color or color.diff.{old,new}MovedAlternative. 
The change between the two colors indicates that a new block was detected. dimmed-zebra Similar to zebra, but additional dimming of uninteresting parts of moved code is performed. The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. dimmed_zebra is a deprecated synonym. --no-color-moved Turn off move detection. This can be used to override configuration settings. It is the same as --color-moved=no. --color-moved-ws=<modes> This configures how whitespace is ignored when performing the move detection for --color-moved. It can be set by the diff.colorMovedWS configuration setting. These modes can be given as a comma separated list: no Do not ignore whitespace when performing move detection. ignore-space-at-eol Ignore changes in whitespace at EOL. ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. allow-indentation-change Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes. --no-color-moved-ws Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as --color-moved-ws=no. --word-diff[=<mode>] Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see --word-diff-regex below. The <mode> defaults to plain, and must be one of: color Highlight changed words using only colors. Implies --color. plain Show words as [-removed-] and {+added+}. Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous. porcelain Use a special line-based format intended for script consumption. Added/removed/unchanged runs are printed in the usual unified diff format, starting with a +/-/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde ~ on a line of its own. none Disable word diff again. Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled. --word-diff-regex=<regex> Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies --word-diff unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append |[^[:space:]] to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, --word-diff-regex=. will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see gitattributes(5) or git-config(1). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings. --color-words[=<regex>] Equivalent to --word-diff=color plus (if a regex was specified) --word-diff-regex=<regex>. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. 
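A minimal sketch of the move-detection and word-diff options described above; what actually gets highlighted depends on the contents of your repository: $ git diff --color-moved=zebra --color-moved-ws=ignore-all-space $ git diff --word-diff=plain $ git diff --word-diff-regex='[A-Za-z_]+|[^[:space:]]'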
--[no-]rename-empty Whether to use empty blobs as rename source. --check Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by core.whitespace configuration. By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code. --ws-error-highlight=<kind> Highlight whitespace errors in the context, old or new lines of the diff. Multiple values are separated by comma, none resets previous values, default reset the list to new and all is a shorthand for old,new,context. When this option is not given, and the configuration variable diff.wsErrorHighlight is not set, only whitespace errors in new lines are highlighted. The whitespace errors are colored with color.diff.whitespace. --full-index Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output. --binary In addition to --full-index, output a binary diff that can be applied with git-apply. Implies --patch. --abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least <n> hexdigits long that uniquely refers the object. In diff-patch output format, --full-index takes higher precedence, i.e. if --full-index is specified, full blob names will be shown regardless of --abbrev. Non default number of digits can be specified with --abbrev=<n>. -B[<n>][/<m>], --break-rewrites[=[<n>][/<m>]] Break complete rewrite changes into pairs of delete and create. This serves two purposes: It affects the way a change that amounts to a total rewrite of a file not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new, and the number m controls this aspect of the -B option (defaults to 60%). -B/70% specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines). When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number n controls this aspect of the -B option (defaults to 50%). -B20% specifies that a change with addition and deletion compared to 20% or more of the files size are eligible for being picked up as a possible source of a rename to another file. -M[<n>], --find-renames[=<n>] Detect renames. If n is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the files size). For example, -M90% means Git should consider a delete/add pair to be a rename if more than 90% of the file hasnt changed. Without a % sign, the number is to be read as a fraction, with a decimal point before it. I.e., -M5 becomes 0.5, and is thus the same as -M50%. Similarly, -M05 is the same as -M5%. To limit detection to exact renames, use -M100%. The default similarity index is 50%. -C[<n>], --find-copies[=<n>] Detect copies as well as renames. See also --find-copies-harder. 
If n is specified, it has the same meaning as for -M<n>. --find-copies-harder For performance reasons, by default, -C option finds copies only if the original file of the copy was modified in the same changeset. This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one -C option has the same effect. -D, --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and /dev/null. The resulting patch is not meant to be applied with patch or git apply; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with -B, omit also the preimage in the deletion part of a delete/create pair. -l<num> The -M and -C options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. --diff-filter=[(A|C|D|M|R|T|U|X|B)...[*]] Select only files that are Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R), have their type (i.e. regular file, symlink, submodule, ...) changed (T), are Unmerged (U), are Unknown (X), or have had their pairing Broken (B). Any combination of the filter characters (including none) can be used. When * (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected. Also, these upper-case letters can be downcased to exclude. E.g. --diff-filter=ad excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled. -S<string> Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripters use. It is useful when youre looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into -S, and keep going until you get the very first version of the block. Binary files are searched as well. -G<regex> Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between -S<regex> --pickaxe-regex and -G<regex>, consider a commit with the following diff in the same file: + return frotz(nitfol, two->ptr, 1, 0); ... - hit = frotz(nitfol, mf2.ptr, 1, 0); While git log -G"frotz\(nitfol" will show this commit, git log -S"frotz\(nitfol" --pickaxe-regex will not (because the number of occurrences of that string did not change). Unless --text is supplied patches of binary files without a textconv filter will be ignored. 
See the pickaxe entry in gitdiffcore(7) for more information. --find-object=<object-id> Look for differences that change the number of occurrences of the specified object. Similar to -S, just the argument is different in that it doesnt search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the -t option in git-log to also find trees. --pickaxe-all When -S or -G finds a change, show all the changes in that changeset, not just the files that contain the change in <string>. --pickaxe-regex Treat the <string> given to -S as an extended POSIX regular expression to match. -O<orderfile> Control the order in which files appear in the output. This overrides the diff.orderFile configuration variable (see git-config(1)). To cancel diff.orderFile, use -O/dev/null. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: Blank lines are ignored, so they can be used as separators for readability. Lines starting with a hash ("#") are ignored, so they can be used for comments. Add a backslash ("\") to the beginning of the pattern if it starts with a hash. Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "foo*bar" matches "fooasdfbar" and "foo/bar/baz/asdf" but not "foobarx". --skip-to=<file>, --rotate-to=<file> Discard the files before the named <file> from the output (i.e. skip to), or move them to the end of the output (i.e. rotate to). These options were invented primarily for the use of the git difftool command, and may not be very useful otherwise. -R Swap two inputs; that is, show differences from index or on-disk file to tree contents. --relative[=<path>], --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. --no-relative can be used to countermand both diff.relative config option and previous --relative. -a, --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b, --ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w, --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex>, --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. 
This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to diff.interHunkContext or 0 if the config option is unset. -W, --function-context Show whole function as context lines for each change. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). --exit-code Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences. --quiet Disable all output of the program. Implies --exit-code. --ext-diff Allow an external diff helper to be executed. If you set an external diff driver with gitattributes(5), you need to use this option with git-log(1) and friends. --no-ext-diff Disallow external diff drivers. --textconv, --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See gitattributes(5) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for git-diff(1) and git-log(1), but not for git-format-patch(1) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. <when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the ignore option in git-config(1) or gitmodules(5). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --default-prefix Use the default source and destination prefixes ("a/" and "b/"). This is usually the default already, but may be used to override config such as diff.noprefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with --ita-visible-in-index. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also gitdiffcore(7). -1 --base, -2 --ours, -3 --theirs Compare the working tree with the "base" version (stage #1), "our branch" (stage #2) or "their branch" (stage #3). The index contains these stages only for unmerged entries i.e. while resolving conflicts. See git-read-tree(1) section "3-Way Merge" for detailed information. -0 Omit diff output for unmerged entries and just show "Unmerged". Can be used only when comparing the working tree with the index. <path>... 
The <paths> parameters, when given, are used to limit the diff to the named paths (you can give directory names and get diff for all files under them). RAW OUTPUT FORMAT top The raw output format from "git-diff-index", "git-diff-tree", "git-diff-files" and "git diff --raw" are very similar. These commands all compare two sets of things; what is compared differs: git-diff-index <tree-ish> compares the <tree-ish> and the files on the filesystem. git-diff-index --cached <tree-ish> compares the <tree-ish> and the index. git-diff-tree [-r] <tree-ish-1> <tree-ish-2> [<pattern>...] compares the trees named by the two arguments. git-diff-files [<pattern>...] compares the index and the files on the filesystem. The "git-diff-tree" command begins its output by printing the hash of what is being compared. After that, all the commands print one output line per changed file. An output line is formatted this way: in-place edit :100644 100644 bcd1234 0123456 M file0 copy-edit :100644 100644 abcd123 1234567 C68 file1 file2 rename-edit :100644 100644 abcd123 1234567 R86 file1 file3 create :000000 100644 0000000 1234567 A file4 delete :100644 000000 1234567 0000000 D file5 unmerged :000000 000000 0000000 0000000 U file6 That is, from the left to the right: 1. a colon. 2. mode for "src"; 000000 if creation or unmerged. 3. a space. 4. mode for "dst"; 000000 if deletion or unmerged. 5. a space. 6. sha1 for "src"; 0{40} if creation or unmerged. 7. a space. 8. sha1 for "dst"; 0{40} if deletion, unmerged or "work tree out of sync with the index". 9. a space. 10. status, followed by optional "score" number. 11. a tab or a NUL when -z option is used. 12. path for "src" 13. a tab or a NUL when -z option is used; only exists for C or R. 14. path for "dst"; only exists for C or R. 15. an LF or a NUL when -z option is used, to terminate the record. Possible status letters are: A: addition of a file C: copy of a file into a new one D: deletion of a file M: modification of the contents or mode of a file R: renaming of a file T: change in the type of the file (regular file, symbolic link or submodule) U: file is unmerged (you must complete the merge before it can be committed) X: "unknown" change type (most probably a bug, please report it) Status letters C and R are always followed by a score (denoting the percentage of similarity between the source and target of the move or copy). Status letter M may be followed by a score (denoting the percentage of dissimilarity) for file rewrites. The sha1 for "dst" is shown as all 0s if a file on the filesystem is out of sync with the index. Example: :100644 100644 5be4a4a 0000000 M file.c Without the -z option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). Using -z the filename is output verbatim and the line is terminated by a NUL byte. DIFF FORMAT FOR MERGES top "git-diff-tree", "git-diff-files" and "git-diff --raw" can take -c or --cc option to generate diff output also for merge commits. The output differs from the format described above in the following way: 1. there is a colon for each parent 2. there are more "src" modes and "src" sha1 3. status is concatenated status characters for each parent 4. no optional "score" number 5. tab-separated pathname(s) of the file For -c and --cc, only the destination or final path is shown even if the file was renamed on any side of history. 
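For reference, a sketch of what the RAW OUTPUT FORMAT described above looks like in practice; the modes, abbreviated object names and paths are made up, following the notation of the examples above: $ git diff --raw HEAD~1 HEAD :100644 100644 bcd1234 0123456 M Makefile :100644 100644 abcd123 1234567 R86 old.c new.c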
With --combined-all-paths, the name of the path in each parent is shown followed by the name of the path in the merge commit. Examples for -c and --cc without --combined-all-paths: ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR phooey.c Examples when --combined-all-paths added to either -c or --cc: ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c desc.c desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM foo.sh bar.sh bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR fooey.c fuey.c phooey.c Note that combined diff lists only files which were modified from all parents. GENERATING PATCH TEXT WITH -P top Running git-diff(1), git-log(1), git-show(1), git-diff-index(1), git-diff-tree(1), or git-diff-files(1) with the -p option produces patch text. You can customize the creation of patch text via the GIT_EXTERNAL_DIFF and the GIT_DIFF_OPTS environment variables (see git(1)), and the diff attribute (see gitattributes(5)). What the -p option produces is slightly different from the traditional diff format: 1. It is preceded by a "git diff" header that looks like this: diff --git a/file1 b/file2 The a/ and b/ filenames are the same unless rename/copy is involved. Especially, even for a creation or a deletion, /dev/null is not used in place of the a/ or b/ filenames. When a rename/copy is involved, file1 and file2 show the name of the source file of the rename/copy and the name of the file that the rename/copy produces, respectively. 2. It is followed by one or more extended header lines: old mode <mode> new mode <mode> deleted file mode <mode> new file mode <mode> copy from <path> copy to <path> rename from <path> rename to <path> similarity index <number> dissimilarity index <number> index <hash>..<hash> <mode> File modes are printed as 6-digit octal numbers including the file type and file permission bits. Path names in extended headers do not include the a/ and b/ prefixes. The similarity index is the percentage of unchanged lines, and the dissimilarity index is the percentage of changed lines. It is a rounded down integer, followed by a percent sign. The similarity index value of 100% is thus reserved for two equal files, while 100% dissimilarity means that no line from the old file made it into the new one. The index line includes the blob object names before and after the change. The <mode> is included if the file mode does not change; otherwise, separate lines indicate the old and the new mode. 3. Pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). 4. All the file1 files in the output refer to files before the commit, and all the file2 files refer to files after the commit. It is incorrect to apply each change to each file sequentially. For example, this patch will swap a and b: diff --git a/a b/b rename from a rename to b diff --git a/b b/a rename from b rename to a 5. Hunk headers mention the name of the function to which the hunk applies. See "Defining a custom hunk-header" in gitattributes(5) for details of how to tailor this to specific languages. COMBINED DIFF FORMAT top Any diff-generating command can take the -c or --cc option to produce a combined diff when showing a merge. This is the default format when showing merges with git-diff(1) or git-show(1). 
Note also that you can give suitable --diff-merges option to any of these commands to force generation of diffs in a specific format. A "combined diff" format looks like this: diff --combined describe.c index fabadb8,cc95eb0..4866510 --- a/describe.c +++ b/describe.c @@@ -98,20 -98,12 +98,20 @@@ return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1; } - static void describe(char *arg) -static void describe(struct commit *cmit, int last_one) ++static void describe(char *arg, int last_one) { + unsigned char sha1[20]; + struct commit *cmit; struct commit_list *list; static int initialized = 0; struct commit_name *n; + if (get_sha1(arg, sha1) < 0) + usage(describe_usage); + cmit = lookup_commit_reference(sha1); + if (!cmit) + usage(describe_usage); + if (!initialized) { initialized = 1; for_each_ref(get_name); 1. It is preceded by a "git diff" header, that looks like this (when the -c option is used): diff --combined file or like this (when the --cc option is used): diff --cc file 2. It is followed by one or more extended header lines (this example shows a merge with two parents): index <hash>,<hash>..<hash> mode <mode>,<mode>..<mode> new file mode <mode> deleted file mode <mode>,<mode> The mode <mode>,<mode>..<mode> line appears only if at least one of the <mode> is different from the rest. Extended headers with information about detected content movement (renames and copying detection) are designed to work with the diff of two <tree-ish> and are not used by combined diff format. 3. It is followed by a two-line from-file/to-file header: --- a/file +++ b/file Similar to the two-line header for the traditional unified diff format, /dev/null is used to signal created or deleted files. However, if the --combined-all-paths option is provided, instead of a two-line from-file/to-file, you get an N+1 line from-file/to-file header, where N is the number of parents in the merge commit: --- a/file --- a/file --- a/file +++ b/file This extended format can be useful if rename or copy detection is active, to allow you to see the original name of the file in different parents. 4. Chunk header format is modified to prevent people from accidentally feeding it to patch -p1. Combined diff format was created for review of merge commit changes, and was not meant to be applied. The change is similar to the change in the extended index header: @@@ <from-file-range> <from-file-range> <to-file-range> @@@ There are (number of parents + 1) @ characters in the chunk header for combined diff format. Unlike the traditional unified diff format, which shows two files A and B with a single column that has - (minus appears in A but removed in B), + (plus missing in A but added to B), or " " (space unchanged) prefix, this format compares two or more files file1, file2,... with one file X, and shows how X differs from each of fileN. One column for each of fileN is prepended to the output line to note how Xs line is different from it. A - character in the column N means that the line appears in fileN but it does not appear in the result. A + character in the column N means that the line appears in the result, and fileN does not have that line (in other words, the line was added, from the point of view of that parent). In the above example output, the function signature was changed from both files (hence two - removals from both file1 and file2, plus ++ to mean one line that was added does not appear in either file1 or file2). Also, eight other lines are the same from file1 but do not appear in file2 (hence prefixed with +). 
When shown by git diff-tree -c, it compares the parents of a merge commit with the merge result (i.e. file1..fileN are the parents). When shown by git diff-files -c, it compares the two unresolved merge parents with the working tree file (i.e. file1 is stage 2 aka "our version", file2 is stage 3 aka "their version"). OTHER DIFF FORMATS top The --summary option describes newly added, deleted, renamed and copied files. The --stat option adds diffstat(1) graph to the output. These options can be combined with other options, such as -p, and are meant for human consumption. When showing a change that involves a rename or a copy, --stat output formats the pathnames compactly by combining common prefix and suffix of the pathnames. For example, a change that moves arch/i386/Makefile to arch/x86/Makefile while modifying 4 lines will be shown like this: arch/{i386 => x86}/Makefile | 4 +-- The --numstat option gives the diffstat(1) information but is designed for easier machine consumption. An entry in --numstat output looks like this: 1 2 README 3 1 arch/{i386 => x86}/Makefile That is, from left to right: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. pathname (possibly with rename/copy information); 6. a newline. When -z output option is in effect, the output is formatted this way: 1 2 README NUL 3 1 NUL arch/i386/Makefile NUL arch/x86/Makefile NUL That is: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. a NUL (only exists if renamed/copied); 6. pathname in preimage; 7. a NUL (only exists if renamed/copied); 8. pathname in postimage (only exists if renamed/copied); 9. a NUL. The extra NUL before the preimage path in renamed case is to allow scripts that read the output to tell if the current record being read is a single-path record or a rename/copy record without reading ahead. After reading added and deleted lines, reading up to NUL would yield the pathname, but if that is NUL, the record will show two paths. EXAMPLES top Various ways to check your working tree $ git diff (1) $ git diff --cached (2) $ git diff HEAD (3) $ git diff AUTO_MERGE (4) 1. Changes in the working tree not yet staged for the next commit. 2. Changes between the index and your last commit; what you would be committing if you run git commit without -a option. 3. Changes in the working tree since your last commit; what you would be committing if you run git commit -a 4. Changes in the working tree youve made to resolve textual conflicts so far. Comparing with arbitrary commits $ git diff test (1) $ git diff HEAD -- ./test (2) $ git diff HEAD^ HEAD (3) 1. Instead of using the tip of the current branch, compare with the tip of "test" branch. 2. Instead of comparing with the tip of "test" branch, compare with the tip of the current branch, but limit the comparison to the file "test". 3. Compare the version before the last commit and the last commit. Comparing branches $ git diff topic master (1) $ git diff topic..master (2) $ git diff topic...master (3) 1. Changes between the tips of the topic and the master branches. 2. Same as above. 3. Changes that occurred on the master branch since when the topic branch was started off it. Limiting the diff output $ git diff --diff-filter=MRC (1) $ git diff --name-status (2) $ git diff arch/i386 include/asm-i386 (3) 1. Show only modification, rename, and copy, but not addition or deletion. 2. Show only names and the nature of change, but not actual diff output. 3. Limit diff output to named subtrees. 
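Building on the limiting examples above, a small sketch of combining --name-only with --diff-filter for scripting; the revision range and path are placeholders: $ git diff --name-only --diff-filter=M v1.0..HEAD -- src/ This prints one pathname per line for modified files only, which is convenient to feed into xargs or a shell loop.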
Munging the diff output $ git diff --find-copies-harder -B -C (1) $ git diff -R (2) 1. Spend extra cycles to find renames, copies and complete rewrites (very expensive). 2. Output diff in reverse. CONFIGURATION top Everything below this line in this section is selectively included from the git-config(1) documentation. The content is the same as whats found there: diff.autoRefreshIndex When using git diff to compare with work tree files, do not consider stat-only changes as changed. Instead, silently run git update-index --refresh to update the cached stat information for paths whose contents in the work tree match the contents in the index. This option defaults to true. Note that this affects only git diff Porcelain, and not lower level diff commands such as git diff-files. diff.dirstat A comma separated list of --dirstat parameters specifying the default behavior of the --dirstat option to git-diff(1) and friends. The defaults can be overridden on the command line (using --dirstat=<param1,param2,...>). The fallback defaults (when not changed by diff.dirstat) are changes,noncumulative,3. The following parameters are available: changes Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. lines Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive --dirstat behavior than the changes behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other --*stat options. files Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest --dirstat behavior, since it does not have to look at the file contents at all. cumulative Count changes in a child directory for the parent directory as well. Note that when using cumulative, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the noncumulative parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: files,10,cumulative. diff.statNameWidth Limit the width of the filename part in --stat output. If set, applies to all commands generating --stat output except format-patch. diff.statGraphWidth Limit the width of the graph part in --stat output. If set, applies to all commands generating --stat output except format-patch. diff.context Generate diffs with <n> lines of context instead of the default of 3. This value is overridden by the -U option. diff.interHunkContext Show the context between diff hunks, up to the specified number of lines, thereby fusing the hunks that are close to each other. This value serves as the default for the --inter-hunk-context command line option. 
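As a sketch of setting the configuration variables above from the command line (the chosen values are only examples): $ git config diff.context 5 $ git config diff.interHunkContext 3 $ git config diff.dirstat files,10,cumulative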
diff.external If this config variable is set, diff generation is not performed using the internal diff machinery, but using the given command. Can be overridden with the GIT_EXTERNAL_DIFF environment variable. The command is called with parameters as described under "git Diffs" in git(1). Note: if you want to use an external diff program only on a subset of your files, you might want to use gitattributes(5) instead. diff.ignoreSubmodules Sets the default value of --ignore-submodules. Note that this affects only git diff Porcelain, and not lower level diff commands such as git diff-files. git checkout and git switch also honor this setting when reporting uncommitted changes. Setting it to all disables the submodule summary normally shown by git commit and git status when status.submoduleSummary is set unless it is overridden by using the --ignore-submodules command-line option. The git submodule commands are not affected by this setting. By default this is set to untracked so that any untracked submodules are ignored. diff.mnemonicPrefix If set, git diff uses a prefix pair that is different from the standard "a/" and "b/" depending on what is being compared. When this configuration is in effect, reverse diff output also swaps the order of the prefixes: git diff compares the (i)ndex and the (w)ork tree; git diff HEAD compares a (c)ommit and the (w)ork tree; git diff --cached compares a (c)ommit and the (i)ndex; git diff HEAD:file1 file2 compares an (o)bject and a (w)ork tree entity; git diff --no-index a b compares two non-git things (1) and (2). diff.noprefix If set, git diff does not show any source or destination prefix. diff.relative If set to true, git diff does not show changes outside of the directory and show pathnames relative to the current directory. diff.orderFile File indicating how to order files within a diff. See the -O option to git-diff(1) for details. If diff.orderFile is a relative pathname, it is treated as relative to the top of the working tree. diff.renameLimit The number of files to consider in the exhaustive portion of copy/rename detection; equivalent to the git diff option -l. If not set, the default value is currently 1000. This setting has no effect if rename detection is turned off. diff.renames Whether and how Git detects renames. If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. If set to "copies" or "copy", Git will detect copies, as well. Defaults to true. Note that this affects only git diff Porcelain like git-diff(1) and git-log(1), and not lower level commands such as git-diff-files(1). diff.suppressBlankEmpty A boolean to inhibit the standard behavior of printing a space before each empty output line. Defaults to false. diff.submodule Specify the format in which differences in submodules are shown. The "short" format just shows the names of the commits at the beginning and end of the range. The "log" format lists the commits in the range like git-submodule(1) summary does. The "diff" format shows an inline diff of the changed contents of the submodule. Defaults to "short". diff.wordRegex A POSIX Extended Regular Expression used to determine what is a "word" when performing word-by-word difference calculations. Character sequences that match the regular expression are "words", all other characters are ignorable whitespace. diff.<driver>.command The custom diff driver command. See gitattributes(5) for details. 
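Likewise, a brief sketch for a few of the variables above; the chosen values are examples, not recommendations: $ git config diff.renames copies # detect copies as well as renames $ git config diff.mnemonicPrefix true $ git config diff.submodule log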
diff.<driver>.xfuncname The regular expression that the diff driver should use to recognize the hunk header. A built-in pattern may also be used. See gitattributes(5) for details. diff.<driver>.binary Set this option to true to make the diff driver treat files as binary. See gitattributes(5) for details. diff.<driver>.textconv The command that the diff driver should call to generate the text-converted version of a file. The result of the conversion is used to generate a human-readable diff. See gitattributes(5) for details. diff.<driver>.wordRegex The regular expression that the diff driver should use to split words in a line. See gitattributes(5) for details. diff.<driver>.cachetextconv Set this option to true to make the diff driver cache the text conversion outputs. See gitattributes(5) for details. araxis Use Araxis Merge (requires a graphical session) bc Use Beyond Compare (requires a graphical session) bc3 Use Beyond Compare (requires a graphical session) bc4 Use Beyond Compare (requires a graphical session) codecompare Use Code Compare (requires a graphical session) deltawalker Use DeltaWalker (requires a graphical session) diffmerge Use DiffMerge (requires a graphical session) diffuse Use Diffuse (requires a graphical session) ecmerge Use ECMerge (requires a graphical session) emerge Use Emacs' Emerge examdiff Use ExamDiff Pro (requires a graphical session) guiffy Use Guiffys Diff Tool (requires a graphical session) gvimdiff Use gVim (requires a graphical session) kdiff3 Use KDiff3 (requires a graphical session) kompare Use Kompare (requires a graphical session) meld Use Meld (requires a graphical session) nvimdiff Use Neovim opendiff Use FileMerge (requires a graphical session) p4merge Use HelixCore P4Merge (requires a graphical session) smerge Use Sublime Merge (requires a graphical session) tkdiff Use TkDiff (requires a graphical session) vimdiff Use Vim winmerge Use WinMerge (requires a graphical session) xxdiff Use xxdiff (requires a graphical session) diff.indentHeuristic Set this option to false to disable the default heuristics that shift diff hunk boundaries to make patches easier to read. diff.algorithm Choose a diff algorithm. The variants are as follows: default, myers The basic greedy diff algorithm. Currently, this is the default. minimal Spend extra time to make sure the smallest possible diff is produced. patience Use "patience diff" algorithm when generating patches. histogram This algorithm extends the patience algorithm to "support low-occurrence common elements". diff.wsErrorHighlight Highlight whitespace errors in the context, old or new lines of the diff. Multiple values are separated by comma, none resets previous values, default reset the list to new and all is a shorthand for old,new,context. The whitespace errors are colored with color.diff.whitespace. The command line option --ws-error-highlight=<kind> overrides this setting. diff.colorMoved If set to either a valid <mode> or a true value, moved lines in a diff are colored differently, for details of valid modes see --color-moved in git-diff(1). If simply set to true the default color mode will be used. When set to false, moved lines are not colored. diff.colorMovedWS When moved lines are colored using e.g. the diff.colorMoved setting, this option controls the <mode> how spaces are treated for details of valid modes see --color-moved-ws in git-diff(1). 
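A hedged sketch combining some of the settings above: a custom textconv diff driver plus move-detection colouring. The driver name "hex", the *.bin pattern and the use of xxd are illustrative, not part of Git: $ git config diff.hex.textconv "xxd" $ git config diff.hex.binary true $ echo '*.bin diff=hex' >> .gitattributes $ git config diff.colorMoved zebra $ git config diff.colorMovedWS ignore-all-space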
SEE ALSO top diff(1), git-difftool(1), git-log(1), gitdiffcore(7), git-format-patch(1), git-apply(1), git-show(1) GIT top Part of the git(1) suite COLOPHON top This page is part of the git (Git distributed version control system) project. Information about the project can be found at http://git-scm.com/. If you have a bug report for this manual page, see http://git-scm.com/community. This page was obtained from the project's upstream Git repository https://github.com/git/git.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Git 2.43.0.174.g055bb6 2023-12-20 GIT-DIFF(1) Pages that refer to this page: git(1), git-config(1), git-diff(1), git-diff-files(1), git-diff-index(1), git-difftool(1), git-diff-tree(1), git-fast-export(1), git-format-patch(1), git-log(1), git-ls-files(1), git-merge(1), git-merge-file(1), git-p4(1), git-pull(1), git-range-diff(1), git-rebase(1), git-show(1), git-status(1), git-submodule(1), stg-diff(1), stg-export(1), stg-patches(1), stg-show(1), gitdiffcore(7), giteveryday(7), gitfaq(7), gitglossary(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# git diff\n\n> Show changes to tracked files.\n> More information: <https://git-scm.com/docs/git-diff>.\n\n- Show unstaged changes:\n\n`git diff`\n\n- Show all uncommitted changes (including staged ones):\n\n`git diff HEAD`\n\n- Show only staged (added, but not yet committed) changes:\n\n`git diff --staged`\n\n- Show changes from all commits since a given date/time (a date expression, e.g. "1 week 2 days" or an ISO date):\n\n`git diff 'HEAD@{3 months|weeks|days|hours|seconds ago}'`\n\n- Show only names of changed files since a given commit:\n\n`git diff --name-only {{commit}}`\n\n- Output a summary of file creations, renames and mode changes since a given commit:\n\n`git diff --summary {{commit}}`\n\n- Compare a single file between two branches or commits:\n\n`git diff {{branch_1}}..{{branch_2}} [--] {{path/to/file}}`\n\n- Compare a file from another branch with a file in the current working tree:\n\n`git diff {{branch}}:{{path/to/file2}} {{path/to/file}}`\n
git-diff-files
git-diff-files(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training git-diff-files(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | RAW OUTPUT FORMAT | DIFF FORMAT FOR MERGES | GENERATING PATCH TEXT WITH -P | COMBINED DIFF FORMAT | OTHER DIFF FORMATS | GIT | COLOPHON GIT-DIFF-FILES(1) Git Manual GIT-DIFF-FILES(1) NAME top git-diff-files - Compares files in the working tree and the index SYNOPSIS top git diff-files [-q] [-0 | -1 | -2 | -3 | -c | --cc] [<common-diff-options>] [<path>...] DESCRIPTION top Compares the files in the working tree and the index. When paths are specified, compares only those named paths. Otherwise all entries in the index are compared. The output format is the same as for git diff-index and git diff-tree. OPTIONS top -p, -u, --patch Generate patch (see the section called GENERATING PATCH TEXT WITH -P). -s, --no-patch Suppress all output from the diff machinery. Useful for commands like git show that show the patch by default to squelch their output, or to cancel the effect of options like --patch, --stat earlier on the command line in an alias. -U<n>, --unified=<n> Generate diffs with <n> lines of context instead of the usual three. Implies --patch. --output=<file> Output to a specific file instead of stdout. --output-indicator-new=<char>, --output-indicator-old=<char>, --output-indicator-context=<char> Specify the character used to indicate new, old or context lines in the generated patch. Normally they are +, - and ' ' respectively. --raw Generate the diff in raw format. This is the default. --patch-with-raw Synonym for -p --raw. --indent-heuristic Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default. --no-indent-heuristic Disable the indent heuristic. --minimal Spend extra time to make sure the smallest possible diff is produced. --patience Generate a diff using the "patience diff" algorithm. --histogram Generate a diff using the "histogram diff" algorithm. --anchored=<text> Generate a diff using the "anchored diff" algorithm. This option may be specified more than once. If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally. --diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows: default, myers The basic greedy diff algorithm. Currently, this is the default. minimal Spend extra time to make sure the smallest possible diff is produced. patience Use "patience diff" algorithm when generating patches. histogram This algorithm extends the patience algorithm to "support low-occurrence common elements". For instance, if you configured the diff.algorithm variable to a non-default value and want to use the default one, then you have to use --diff-algorithm=default option. --stat[=<width>[,<name-width>[,<count>]]] Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by <width>. The width of the filename part can be limited by giving another width <name-width> after a comma or by setting diff.statNameWidth=<width>. The width of the graph part can be limited by using --stat-graph-width=<width> or by setting diff.statGraphWidth=<width>. 
Using --stat or --stat-graph-width affects all commands generating a stat graph, while setting diff.statNameWidth or diff.statGraphWidth does not affect git format-patch. By giving a third parameter <count>, you can limit the output to the first <count> lines, followed by ... if there are more. These parameters can also be set individually with --stat-width=<width>, --stat-name-width=<name-width> and --stat-count=<count>. --compact-summary Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it's a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat. The information is put between the filename part and the graph part. Implies --stat. --numstat Similar to --stat, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two - instead of saying 0 0. --shortstat Output only the last line of the --stat format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,...>], --dirstat[=<param1,param2,...>] Output the distribution of relative amount of changes for each sub-directory. The behavior of --dirstat can be customized by passing it a comma separated list of parameters. The defaults are controlled by the diff.dirstat configuration variable (see git-config(1)). The following parameters are available: changes Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. lines Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive --dirstat behavior than the changes behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other --*stat options. files Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest --dirstat behavior, since it does not have to look at the file contents at all. cumulative Count changes in a child directory for the parent directory as well. Note that when using cumulative, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the noncumulative parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: --dirstat=files,10,cumulative. --cumulative Synonym for --dirstat=cumulative --dirstat-by-file[=<param1,param2>...] Synonym for --dirstat=files,param1,param2... --summary Output a condensed summary of extended header information such as creations, renames and mode changes. --patch-with-stat Synonym for -p --stat. 
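As a quick illustration of the patch and diffstat options above, the commands below assume a repository in which a tracked file (README.md is only a placeholder name) has been edited but not yet staged:

```sh
echo "scratch note" >> README.md   # README.md stands in for any tracked file
git diff-files                     # raw output (the default format)
git diff-files -p README.md        # unified patch limited to one path
git diff-files --stat              # diffstat graph
git diff-files --numstat           # machine-friendly added/deleted counts
git diff-files --dirstat=files,10,cumulative   # per-directory distribution
```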
-z When --raw, --numstat, --name-only or --name-status has been given, do not munge pathnames and use NULs as output field terminators. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). --name-only Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the git-log(1) manual page. --name-status Show only names and status of changed files. See the description of the --diff-filter option on what the status letters mean. Just like --name-only the file names are often encoded in UTF-8. --submodule[=<format>] Specify how differences in submodules are shown. When specifying --submodule=short the short format is used. This format just shows the names of the commits at the beginning and end of the range. When --submodule or --submodule=log is specified, the log format is used. This format lists the commits in the range like git-submodule(1) summary does. When --submodule=diff is specified, the diff format is used. This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to diff.submodule or the short format if the config option is unset. --color[=<when>] Show colored diff. --color (i.e. without =<when>) is the same as --color=always. <when> can be one of always, never, or auto. --no-color Turn off colored diff. It is the same as --color=never. --color-moved[=<mode>] Moved lines of code are colored differently. The <mode> defaults to no if the option is not given and to zebra if the option with no mode is given. The mode must be one of: no Moved lines are not highlighted. default Is a synonym for zebra. This may change to a more sensible mode in the future. plain Any line that is added in one location and was removed in another location will be colored with color.diff.newMoved. Similarly color.diff.oldMoved will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation. blocks Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the color.diff.{old,new}Moved color. Adjacent blocks cannot be told apart. zebra Blocks of moved text are detected as in blocks mode. The blocks are painted using either the color.diff.{old,new}Moved color or color.diff.{old,new}MovedAlternative. The change between the two colors indicates that a new block was detected. dimmed-zebra Similar to zebra, but additional dimming of uninteresting parts of moved code is performed. The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. dimmed_zebra is a deprecated synonym. --no-color-moved Turn off move detection. This can be used to override configuration settings. It is the same as --color-moved=no. --color-moved-ws=<modes> This configures how whitespace is ignored when performing the move detection for --color-moved. These modes can be given as a comma separated list: no Do not ignore whitespace when performing move detection. ignore-space-at-eol Ignore changes in whitespace at EOL. ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. ignore-all-space Ignore whitespace when comparing lines. 
This ignores differences even if one line has whitespace where the other line has none. allow-indentation-change Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes. --no-color-moved-ws Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as --color-moved-ws=no. --word-diff[=<mode>] Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see --word-diff-regex below. The <mode> defaults to plain, and must be one of: color Highlight changed words using only colors. Implies --color. plain Show words as [-removed-] and {+added+}. Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous. porcelain Use a special line-based format intended for script consumption. Added/removed/unchanged runs are printed in the usual unified diff format, starting with a +/-/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde ~ on a line of its own. none Disable word diff again. Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled. --word-diff-regex=<regex> Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies --word-diff unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append |[^[:space:]] to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, --word-diff-regex=. will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see gitattributes(5) or git-config(1). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings. --color-words[=<regex>] Equivalent to --word-diff=color plus (if a regex was specified) --word-diff-regex=<regex>. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. --[no-]rename-empty Whether to use empty blobs as rename source. --check Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by core.whitespace configuration. By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code. --ws-error-highlight=<kind> Highlight whitespace errors in the context, old or new lines of the diff. Multiple values are separated by comma, none resets previous values, default reset the list to new and all is a shorthand for old,new,context. When this option is not given, and the configuration variable diff.wsErrorHighlight is not set, only whitespace errors in new lines are highlighted. The whitespace errors are colored with color.diff.whitespace. 
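A few hedged examples of the output-shaping and whitespace options above; the pipeline assumes a POSIX shell with xargs available:

```sh
# NUL-terminated names are safe even for paths containing spaces or newlines
git diff-files --name-only -z | xargs -0 ls -l

# Status letter plus path for each unstaged change
git diff-files --name-status

# Word-level diff for review, and a whitespace/conflict-marker check that
# exits non-zero on problems (usable from a pre-commit hook)
git diff --word-diff=color
git diff-files --check
```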
--full-index Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output. --binary In addition to --full-index, output a binary diff that can be applied with git-apply. Implies --patch. --abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least <n> hexdigits long that uniquely refers the object. In diff-patch output format, --full-index takes higher precedence, i.e. if --full-index is specified, full blob names will be shown regardless of --abbrev. Non default number of digits can be specified with --abbrev=<n>. -B[<n>][/<m>], --break-rewrites[=[<n>][/<m>]] Break complete rewrite changes into pairs of delete and create. This serves two purposes: It affects the way a change that amounts to a total rewrite of a file not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new, and the number m controls this aspect of the -B option (defaults to 60%). -B/70% specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines). When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number n controls this aspect of the -B option (defaults to 50%). -B20% specifies that a change with addition and deletion compared to 20% or more of the file's size are eligible for being picked up as a possible source of a rename to another file. -M[<n>], --find-renames[=<n>] Detect renames. If n is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file's size). For example, -M90% means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn't changed. Without a % sign, the number is to be read as a fraction, with a decimal point before it. I.e., -M5 becomes 0.5, and is thus the same as -M50%. Similarly, -M05 is the same as -M5%. To limit detection to exact renames, use -M100%. The default similarity index is 50%. -C[<n>], --find-copies[=<n>] Detect copies as well as renames. See also --find-copies-harder. If n is specified, it has the same meaning as for -M<n>. --find-copies-harder For performance reasons, by default, -C option finds copies only if the original file of the copy was modified in the same changeset. This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one -C option has the same effect. -D, --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and /dev/null. The resulting patch is not meant to be applied with patch or git apply; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with -B, omit also the preimage in the deletion part of a delete/create pair. 
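A sketch of the rename/copy detection knobs described above; it assumes the repository has at least two commits, with HEAD~1 used only as an arbitrary base for comparison:

```sh
# Report a delete/add pair as a rename when at least 90% of the content is
# unchanged, and also consider unmodified files as copy sources
git diff -M90% -C --find-copies-harder --summary HEAD~1 HEAD

# Break wholesale rewrites into a full deletion plus a full insertion
git diff -B -M HEAD~1 HEAD
```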
-l<num> The -M and -C options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. --diff-filter=[(A|C|D|M|R|T|U|X|B)...[*]] Select only files that are Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R), have their type (i.e. regular file, symlink, submodule, ...) changed (T), are Unmerged (U), are Unknown (X), or have had their pairing Broken (B). Any combination of the filter characters (including none) can be used. When * (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected. Also, these upper-case letters can be downcased to exclude. E.g. --diff-filter=ad excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled. -S<string> Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter's use. It is useful when you're looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into -S, and keep going until you get the very first version of the block. Binary files are searched as well. -G<regex> Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between -S<regex> --pickaxe-regex and -G<regex>, consider a commit with the following diff in the same file: + return frotz(nitfol, two->ptr, 1, 0); ... - hit = frotz(nitfol, mf2.ptr, 1, 0); While git log -G"frotz\(nitfol" will show this commit, git log -S"frotz\(nitfol" --pickaxe-regex will not (because the number of occurrences of that string did not change). Unless --text is supplied patches of binary files without a textconv filter will be ignored. See the pickaxe entry in gitdiffcore(7) for more information. --find-object=<object-id> Look for differences that change the number of occurrences of the specified object. Similar to -S, just the argument is different in that it doesn't search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the -t option in git-log to also find trees. --pickaxe-all When -S or -G finds a change, show all the changes in that changeset, not just the files that contain the change in <string>. --pickaxe-regex Treat the <string> given to -S as an extended POSIX regular expression to match. -O<orderfile> Control the order in which files appear in the output. This overrides the diff.orderFile configuration variable (see git-config(1)). To cancel diff.orderFile, use -O/dev/null. The output order is determined by the order of glob patterns in <orderfile>. 
All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: Blank lines are ignored, so they can be used as separators for readability. Lines starting with a hash ("#") are ignored, so they can be used for comments. Add a backslash ("\") to the beginning of the pattern if it starts with a hash. Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "foo*bar" matches "fooasdfbar" and "foo/bar/baz/asdf" but not "foobarx". --skip-to=<file>, --rotate-to=<file> Discard the files before the named <file> from the output (i.e. skip to), or move them to the end of the output (i.e. rotate to). These options were invented primarily for the use of the git difftool command, and may not be very useful otherwise. -R Swap two inputs; that is, show differences from index or on-disk file to tree contents. --relative[=<path>], --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. --no-relative can be used to countermand both diff.relative config option and previous --relative. -a, --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b, --ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w, --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex>, --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to diff.interHunkContext or 0 if the config option is unset. -W, --function-context Show whole function as context lines for each change. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). --exit-code Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences. --quiet Disable all output of the program. Implies --exit-code. --ext-diff Allow an external diff helper to be executed. If you set an external diff driver with gitattributes(5), you need to use this option with git-log(1) and friends. --no-ext-diff Disallow external diff drivers. 
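Building on the pickaxe, ordering, and exit-code options above, a small sketch; the search string "lookup_widget", the order-file patterns, and /tmp/orderfile are illustrative assumptions only:

```sh
# History of an exact string (pickaxe) vs. a regex over the patch text
git log -S'lookup_widget' --oneline
git log -G'lookup_widget\(' --oneline

# Emit documentation changes first, everything else afterwards
printf '%s\n' 'Documentation/*' '*.md' > /tmp/orderfile
git diff-files -p -O/tmp/orderfile

# Script-friendly dirtiness test: --quiet implies --exit-code
if ! git diff-files --quiet; then
    echo "working tree has unstaged changes" >&2
fi
```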
--textconv, --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See gitattributes(5) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for git-diff(1) and git-log(1), but not for git-format-patch(1) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. <when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the ignore option in git-config(1) or gitmodules(5). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --default-prefix Use the default source and destination prefixes ("a/" and "b/"). This is usually the default already, but may be used to override config such as diff.noprefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with --ita-visible-in-index. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also gitdiffcore(7). -1 --base, -2 --ours, -3 --theirs, -0 Diff against the "base" version, "our branch", or "their branch" respectively. With these options, diffs for merged entries are not shown. The default is to diff against our branch (-2) and the cleanly resolved paths. The option -0 can be given to omit diff output for unmerged entries and just show "Unmerged". -c, --cc This compares stage 2 (our branch), stage 3 (their branch), and the working tree file and outputs a combined diff, similar to the way diff-tree shows a merge commit with these flags. -q Remain silent even for nonexistent files RAW OUTPUT FORMAT top The raw output format from "git-diff-index", "git-diff-tree", "git-diff-files" and "git diff --raw" are very similar. These commands all compare two sets of things; what is compared differs: git-diff-index <tree-ish> compares the <tree-ish> and the files on the filesystem. git-diff-index --cached <tree-ish> compares the <tree-ish> and the index. git-diff-tree [-r] <tree-ish-1> <tree-ish-2> [<pattern>...] compares the trees named by the two arguments. git-diff-files [<pattern>...] compares the index and the files on the filesystem. The "git-diff-tree" command begins its output by printing the hash of what is being compared. After that, all the commands print one output line per changed file. 
An output line is formatted this way: in-place edit :100644 100644 bcd1234 0123456 M file0 copy-edit :100644 100644 abcd123 1234567 C68 file1 file2 rename-edit :100644 100644 abcd123 1234567 R86 file1 file3 create :000000 100644 0000000 1234567 A file4 delete :100644 000000 1234567 0000000 D file5 unmerged :000000 000000 0000000 0000000 U file6 That is, from the left to the right: 1. a colon. 2. mode for "src"; 000000 if creation or unmerged. 3. a space. 4. mode for "dst"; 000000 if deletion or unmerged. 5. a space. 6. sha1 for "src"; 0{40} if creation or unmerged. 7. a space. 8. sha1 for "dst"; 0{40} if deletion, unmerged or "work tree out of sync with the index". 9. a space. 10. status, followed by optional "score" number. 11. a tab or a NUL when -z option is used. 12. path for "src" 13. a tab or a NUL when -z option is used; only exists for C or R. 14. path for "dst"; only exists for C or R. 15. an LF or a NUL when -z option is used, to terminate the record. Possible status letters are: A: addition of a file C: copy of a file into a new one D: deletion of a file M: modification of the contents or mode of a file R: renaming of a file T: change in the type of the file (regular file, symbolic link or submodule) U: file is unmerged (you must complete the merge before it can be committed) X: "unknown" change type (most probably a bug, please report it) Status letters C and R are always followed by a score (denoting the percentage of similarity between the source and target of the move or copy). Status letter M may be followed by a score (denoting the percentage of dissimilarity) for file rewrites. The sha1 for "dst" is shown as all 0s if a file on the filesystem is out of sync with the index. Example: :100644 100644 5be4a4a 0000000 M file.c Without the -z option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). Using -z the filename is output verbatim and the line is terminated by a NUL byte. DIFF FORMAT FOR MERGES top "git-diff-tree", "git-diff-files" and "git-diff --raw" can take -c or --cc option to generate diff output also for merge commits. The output differs from the format described above in the following way: 1. there is a colon for each parent 2. there are more "src" modes and "src" sha1 3. status is concatenated status characters for each parent 4. no optional "score" number 5. tab-separated pathname(s) of the file For -c and --cc, only the destination or final path is shown even if the file was renamed on any side of history. With --combined-all-paths, the name of the path in each parent is shown followed by the name of the path in the merge commit. Examples for -c and --cc without --combined-all-paths: ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR phooey.c Examples when --combined-all-paths added to either -c or --cc: ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c desc.c desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM foo.sh bar.sh bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR fooey.c fuey.c phooey.c Note that combined diff lists only files which were modified from all parents. GENERATING PATCH TEXT WITH -P top Running git-diff(1), git-log(1), git-show(1), git-diff-index(1), git-diff-tree(1), or git-diff-files(1) with the -p option produces patch text. 
You can customize the creation of patch text via the GIT_EXTERNAL_DIFF and the GIT_DIFF_OPTS environment variables (see git(1)), and the diff attribute (see gitattributes(5)). What the -p option produces is slightly different from the traditional diff format: 1. It is preceded by a "git diff" header that looks like this: diff --git a/file1 b/file2 The a/ and b/ filenames are the same unless rename/copy is involved. Especially, even for a creation or a deletion, /dev/null is not used in place of the a/ or b/ filenames. When a rename/copy is involved, file1 and file2 show the name of the source file of the rename/copy and the name of the file that the rename/copy produces, respectively. 2. It is followed by one or more extended header lines: old mode <mode> new mode <mode> deleted file mode <mode> new file mode <mode> copy from <path> copy to <path> rename from <path> rename to <path> similarity index <number> dissimilarity index <number> index <hash>..<hash> <mode> File modes are printed as 6-digit octal numbers including the file type and file permission bits. Path names in extended headers do not include the a/ and b/ prefixes. The similarity index is the percentage of unchanged lines, and the dissimilarity index is the percentage of changed lines. It is a rounded down integer, followed by a percent sign. The similarity index value of 100% is thus reserved for two equal files, while 100% dissimilarity means that no line from the old file made it into the new one. The index line includes the blob object names before and after the change. The <mode> is included if the file mode does not change; otherwise, separate lines indicate the old and the new mode. 3. Pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). 4. All the file1 files in the output refer to files before the commit, and all the file2 files refer to files after the commit. It is incorrect to apply each change to each file sequentially. For example, this patch will swap a and b: diff --git a/a b/b rename from a rename to b diff --git a/b b/a rename from b rename to a 5. Hunk headers mention the name of the function to which the hunk applies. See "Defining a custom hunk-header" in gitattributes(5) for details of how to tailor this to specific languages. COMBINED DIFF FORMAT top Any diff-generating command can take the -c or --cc option to produce a combined diff when showing a merge. This is the default format when showing merges with git-diff(1) or git-show(1). Note also that you can give suitable --diff-merges option to any of these commands to force generation of diffs in a specific format. A "combined diff" format looks like this: diff --combined describe.c index fabadb8,cc95eb0..4866510 --- a/describe.c +++ b/describe.c @@@ -98,20 -98,12 +98,20 @@@ return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1; } - static void describe(char *arg) -static void describe(struct commit *cmit, int last_one) ++static void describe(char *arg, int last_one) { + unsigned char sha1[20]; + struct commit *cmit; struct commit_list *list; static int initialized = 0; struct commit_name *n; + if (get_sha1(arg, sha1) < 0) + usage(describe_usage); + cmit = lookup_commit_reference(sha1); + if (!cmit) + usage(describe_usage); + if (!initialized) { initialized = 1; for_each_ref(get_name); 1. It is preceded by a "git diff" header, that looks like this (when the -c option is used): diff --combined file or like this (when the --cc option is used): diff --cc file 2. 
It is followed by one or more extended header lines (this example shows a merge with two parents): index <hash>,<hash>..<hash> mode <mode>,<mode>..<mode> new file mode <mode> deleted file mode <mode>,<mode> The mode <mode>,<mode>..<mode> line appears only if at least one of the <mode> is different from the rest. Extended headers with information about detected content movement (renames and copying detection) are designed to work with the diff of two <tree-ish> and are not used by combined diff format. 3. It is followed by a two-line from-file/to-file header: --- a/file +++ b/file Similar to the two-line header for the traditional unified diff format, /dev/null is used to signal created or deleted files. However, if the --combined-all-paths option is provided, instead of a two-line from-file/to-file, you get an N+1 line from-file/to-file header, where N is the number of parents in the merge commit: --- a/file --- a/file --- a/file +++ b/file This extended format can be useful if rename or copy detection is active, to allow you to see the original name of the file in different parents. 4. Chunk header format is modified to prevent people from accidentally feeding it to patch -p1. Combined diff format was created for review of merge commit changes, and was not meant to be applied. The change is similar to the change in the extended index header: @@@ <from-file-range> <from-file-range> <to-file-range> @@@ There are (number of parents + 1) @ characters in the chunk header for combined diff format. Unlike the traditional unified diff format, which shows two files A and B with a single column that has - (minus appears in A but removed in B), + (plus missing in A but added to B), or " " (space unchanged) prefix, this format compares two or more files file1, file2,... with one file X, and shows how X differs from each of fileN. One column for each of fileN is prepended to the output line to note how X's line is different from it. A - character in the column N means that the line appears in fileN but it does not appear in the result. A + character in the column N means that the line appears in the result, and fileN does not have that line (in other words, the line was added, from the point of view of that parent). In the above example output, the function signature was changed from both files (hence two - removals from both file1 and file2, plus ++ to mean one line that was added does not appear in either file1 or file2). Also, eight other lines are the same from file1 but do not appear in file2 (hence prefixed with +). When shown by git diff-tree -c, it compares the parents of a merge commit with the merge result (i.e. file1..fileN are the parents). When shown by git diff-files -c, it compares the two unresolved merge parents with the working tree file (i.e. file1 is stage 2 aka "our version", file2 is stage 3 aka "their version"). OTHER DIFF FORMATS top The --summary option describes newly added, deleted, renamed and copied files. The --stat option adds diffstat(1) graph to the output. These options can be combined with other options, such as -p, and are meant for human consumption. When showing a change that involves a rename or a copy, --stat output formats the pathnames compactly by combining common prefix and suffix of the pathnames. For example, a change that moves arch/i386/Makefile to arch/x86/Makefile while modifying 4 lines will be shown like this: arch/{i386 => x86}/Makefile | 4 +-- The --numstat option gives the diffstat(1) information but is designed for easier machine consumption. 
An entry in --numstat output looks like this: 1 2 README 3 1 arch/{i386 => x86}/Makefile That is, from left to right: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. pathname (possibly with rename/copy information); 6. a newline. When -z output option is in effect, the output is formatted this way: 1 2 README NUL 3 1 NUL arch/i386/Makefile NUL arch/x86/Makefile NUL That is: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. a NUL (only exists if renamed/copied); 6. pathname in preimage; 7. a NUL (only exists if renamed/copied); 8. pathname in postimage (only exists if renamed/copied); 9. a NUL. The extra NUL before the preimage path in renamed case is to allow scripts that read the output to tell if the current record being read is a single-path record or a rename/copy record without reading ahead. After reading added and deleted lines, reading up to NUL would yield the pathname, but if that is NUL, the record will show two paths. GIT top Part of the git(1) suite
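Given the raw and --numstat record layouts described above, a hedged sketch of script-side parsing (display-oriented; for robust scripting the -z variants described earlier are preferable):

```sh
# Raw records put a tab between the metadata and the pathname, so cut works
git diff-files --raw | cut -f2-      # pathnames only

# Sum insertions/deletions; binary files report "-" and are skipped
git diff-files --numstat |
    awk '$1 != "-" { add += $1; del += $2 }
         END { printf "%d insertions, %d deletions\n", add, del }'
```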
# git diff-files\n\n> Compare files using their sha1 hashes and modes.\n> More information: <https://git-scm.com/docs/git-diff-files>.\n\n- Compare all changed files:\n\n`git diff-files`\n\n- Compare only specified files:\n\n`git diff-files {{path/to/file}}`\n\n- Show only the names of changed files:\n\n`git diff-files --name-only`\n\n- Output a summary of extended header information:\n\n`git diff-files --summary`\n
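One situation the examples above do not cover is an interrupted merge. A minimal sketch using the stage-selection options from the manual page, where conflicted.c is only a placeholder for a path that is currently unmerged:

```sh
git diff-files -2 conflicted.c     # working tree vs. stage 2 ("ours", the default)
git diff-files -3 conflicted.c     # working tree vs. stage 3 ("theirs")
git diff-files --cc conflicted.c   # combined diff against both stages
```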